Apple Lossless goes up to 24-bit at 192 kHz, which is over 4 Mbit/s. I believe Bluetooth 5.x only supports 2 Mbit/s, although I'm not sure.
According to Wikipedia, Apple did file a patent in 2019 for high-bandwidth, low-latency audio streaming over Bluetooth (up to 8 Mbit/s)[0]. Looks like they've been working on this for a while.
I assume "up to 8 Mbit/s" means that in reality you can't use much more than 1 Mbit/s: in a crowded space with lots of WiFi devices and lots of other people using Bluetooth headphones, there will be plenty of interference and data rates will be forced down.
Bluetooth and WiFi already don't properly coexist: one device can transmit a Bluetooth packet while another transmits a WiFi packet at the same time, and the likelihood is that both packets get clobbered and lost. Proper coexistence would involve a device saying "I want to reserve frequency bands X, Y and Z for the next 1 millisecond", with no other device using those bands for that time. That exists for WiFi clients, but it doesn't interoperate with Bluetooth, Thread or Zigbee.
This is why "audiophile" is a joke term in the audio engineering community.
It is not biologically possible for us (unless one is an alien) to hear above 22 kHz. 44.1 kHz is enough to record in perfect fidelity [0]. For strictly listening purposes, 192 kHz/24-bit is wasteful, just extreme overkill because "bigger numbers = more good".
People can't even reliably detect a difference between 320 kbps MP3s and source-quality FLAC/ALAC/WAV/PCM, even on very good headphone and speaker setups. Good-quality MP3/AAC is all one needs for AirPods, and the Bluetooth protocol can easily handle those bitrates.
edit: changed record->playback. Here I'm just discussing playback; for audio production purposes 192 kHz/24-bit is desirable for several reasons. Once one ships that album/song, though, it should be downmixed back to 44.1 kHz/16-bit.
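The "44.1 kHz is enough" claim rests on the sampling theorem, and it's easy to check numerically. A minimal sketch (assuming numpy is available; all names here are illustrative): sample an 18 kHz tone at 44.1 kHz, then rebuild the waveform between the samples with sinc interpolation and compare against the ideal continuous tone.

```python
import numpy as np

fs = 44_100   # CD sample rate
f0 = 18_000   # test tone, safely below the 22.05 kHz Nyquist limit
n = np.arange(4096)
samples = np.sin(2 * np.pi * f0 * n / fs)

# Whittaker-Shannon (sinc) reconstruction, evaluated between samples.
# We stay in the middle of the buffer to avoid edge-truncation effects.
t = np.arange(2000, 2010, 0.05)  # times in units of the sample period
recon = np.array([np.dot(samples, np.sinc(ti - n)) for ti in t])
ideal = np.sin(2 * np.pi * f0 * t / fs)
max_err = float(np.max(np.abs(recon - ideal)))  # tiny: the tone survives intact
```

The reconstruction error is dominated by the finite sinc window, not by any information the 44.1 kHz sampling failed to capture.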
It's not necessarily just that we can't hear above 20 kHz; recording above 44.1 kHz has its merits. For example, if you want to apply digital distortion or amplitude modulation, then up-sampling (or having a high sample rate to begin with) stops the new components from wrapping around in the spectrum and producing nasty aliasing artefacts. The end user should never need to worry about this; it should be downsampled at the master.
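That wrap-around is easy to show numerically. A minimal numpy sketch (names illustrative): cubic waveshaping, a crude stand-in for digital distortion, applied to a 15 kHz tone at 44.1 kHz grows a 3rd harmonic at 45 kHz, which is above Nyquist and folds back to an inharmonic 900 Hz component.

```python
import numpy as np

fs = 44_100
f0 = 15_000
N = 44_100                     # one second of audio gives 1 Hz FFT bins
t = np.arange(N) / fs
x = np.sin(2 * np.pi * f0 * t)

# Cubic waveshaping: sin^3 = (3 sin(w t) - sin(3 w t)) / 4, so it creates
# a 3rd harmonic at 45 kHz. That's above Nyquist (22.05 kHz), so it wraps
# around to |3*f0 - fs| = 900 Hz, nowhere near a harmonic of the tone.
y = x ** 3
spectrum = np.abs(np.fft.rfft(y)) / N  # bin magnitude = amplitude / 2
alias_bin = int(round(abs(3 * f0 - fs)))   # 900 Hz
alias_level = float(spectrum[alias_bin])   # ~0.125, i.e. amplitude 0.25
```

Running the distortion at 4x the rate and low-pass filtering before downsampling would leave that harmonic where it belongs and then discard it cleanly.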
You forgot the #1 reason for recording at a high sample rate: using a lower-order analog low-pass filter without affecting the audible range. Sharp digital low-pass filters have far fewer tradeoffs and better SNR than analog ones, so doing the anti-aliasing low-pass filtering digitally has big advantages.
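To put rough numbers on that, here's a back-of-the-envelope calculation (plain Python, illustrative) of how much room the analog anti-aliasing filter gets between a 20 kHz audible passband edge and Nyquist at each sample rate:

```python
import math

def transition_octaves(fs, passband=20_000.0):
    """Width, in octaves, between the audible passband edge and the
    Nyquist frequency, where the anti-aliasing filter must reach full
    attenuation before sampling at rate fs."""
    return math.log2((fs / 2) / passband)

cd = transition_octaves(44_100)    # ~0.14 octaves: needs a brutally steep filter
hi = transition_octaves(192_000)   # ~2.26 octaves: a gentle low-order slope is fine
```

At 44.1 kHz the analog filter has barely a seventh of an octave to work with; at 192 kHz it has over two octaves, so a cheap, well-behaved filter suffices and the sharp cutoff can be done digitally afterwards.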
The interesting part is in how actually valuable improvements are rare, while useless "improvements" are common.
I've got all my music in 44.1 kHz or 48 kHz FLAC (only on the server, so I can transcode it to Ogg Opus for mobile playback, reducing space usage without lossy-to-lossy artifacts). Similar effects apply to many other such cases.
Audiophiles buy $10,000 golden HDMI cables... which don't even support HDMI 2.0a. They buy gold-plated TOSLINK cables (!)
Transcoding is a fairly common feature, although typically to mp3. I use Navidrome [0] to self-host my own music, and it lets me set up custom transcoding profiles with ffmpeg.
KDE's Amarok (a semi-dead project) can automatically transcode audio files when you copy them to e.g. a phone, and Jellyfin (an Emby fork, which is a Plex clone) can transcode files on the fly during playback.
I've simply written a small python utility to handle everything automatically.
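A utility in that spirit might look like the sketch below (Python; assumes ffmpeg with libopus on PATH; the function names and the 128k default are illustrative choices, not the commenter's actual code):

```python
import subprocess
from pathlib import Path

def opus_cmd(src: Path, dst: Path, bitrate: str = "128k") -> list[str]:
    """Build an ffmpeg command line transcoding one FLAC file to Opus."""
    return ["ffmpeg", "-i", str(src), "-c:a", "libopus",
            "-b:a", bitrate, str(dst)]

def transcode_tree(src_root: Path, dst_root: Path) -> None:
    """Mirror a FLAC library as Opus, skipping files already converted."""
    for flac in src_root.rglob("*.flac"):
        dst = dst_root / flac.relative_to(src_root).with_suffix(".opus")
        if dst.exists():
            continue  # incremental: only new/changed source files are converted
        dst.parent.mkdir(parents=True, exist_ok=True)
        subprocess.run(opus_cmd(flac, dst), check=True)
```

Run periodically, it only converts files missing from the mirror, so syncing the mobile copy stays cheap.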
Once I learned in my EE classes that the Nyquist frequency rule existed, I became incredibly suspicious of anybody who threw the word "lossless" around. Glad to know that suspicion was well-founded.
Aside, "Nyquist criterion" would be a sick band name.
While 192 kHz is overkill, there are plenty of instruments that produce non-trivial amounts of energy above 22 kHz. While we're unable to hear them, some can be felt (depending on volume and proximity). A pair of headphones won't reproduce those ultrasonic energies, but good speakers can. Even so, this is only germane in a handful of genres with high-fidelity recording and mastering.
The same goes for higher-resolution samples; an audio pipeline that uses the extra dynamic range can do cool stuff.
This is somewhat true. Also, while human beings cannot hear a dog whistle per se, due to the way the human ear works they can hear the beats produced by the combination of that sound with another, straightforwardly audible-frequency sound. Per Nørgård exploits this in his Symphony No. 5. But examples of this actually affecting enjoyment or even an A/B test in practice are so few they shouldn't really guide one's encoding choices, CD audio is already sufficiently wide in practice and claims that we need more are audiophile wishful thinking.
I'm definitely not arguing for the stupid "audiophile" frequencies. For the tiny fraction of music taking advantage of ultrasonic frequencies and high dynamic range it's nice there's an option now.
I hope the availability of high resolution, or more importantly spatial audio, will inspire some artists to really explore the space. I'm hoping for stuff like the Flaming Lips' Zaireeka or the Yoshimi album's 5.1 mix.
Does sampling above the Nyquist rate guarantee phase reconstruction, though? In a real-world (not theoretically ideal) implementation of a reconstruction filter, what's the phase-reproduction ability as you approach the Nyquist rate? I thought the Shannon reconstruction theorem guaranteed only frequency, not phase, and that there can be advantages to sampling rates higher than Nyquist alone would suggest. Can anyone with more authority on this subject speak to it? It's a bit fuzzy, but I remember my DSP professor making that argument once and it always stuck with me. It makes some intuitive sense if you picture trying to reconstruct a sinusoid close to (but below) the Nyquist rate: it's not hard to see how a reconstruction filter might produce odd results like incorrect amplitude, oscillating amplitude, or phase shift the closer you get.
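One way to see what the theorem does and doesn't promise: the raw samples of a near-Nyquist tone really do look amplitude-modulated, but that's an artefact of crude interpolation, not lost information; an ideal reconstruction recovers amplitude and phase exactly for a band-limited signal. A numpy sketch (window positions picked by hand for this illustration):

```python
import numpy as np

fs, f0 = 44_100, 20_000            # tone at ~0.91 of the Nyquist frequency
n = np.arange(2048)
x = np.sin(2 * np.pi * f0 * n / fs)

# Raw samples near Nyquist beat: around an envelope "null" the largest
# sample value is far below the true peak of the underlying sinusoid.
dip_naive = float(np.max(np.abs(x[1010:1013])))      # roughly 0.3 here

# Ideal (sinc / Whittaker-Shannon) reconstruction over the same interval
# recovers the constant-amplitude, correctly-phased sinusoid: the apparent
# modulation comes from connecting the dots, not from the sampling itself.
t = np.arange(1010.0, 1012.0, 0.05)
recon = np.array([np.dot(x, np.sinc(ti - n)) for ti in t])
peak_recon = float(np.max(np.abs(recon)))            # back near 1.0
```

The professor's point survives in practice: real reconstruction filters are finite and non-ideal, so the closer a tone sits to Nyquist the more the filter's imperfections show up, which is one honest argument for some headroom above 2x the highest frequency.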
> found that a function called memcpy was the culprit, most memory players use memcpy and this is one of the reasons why memory play sounds worse ie digital sounding. Fortunately there is an optimised version of memcpy from http://www.agner.org/optimize/, using this version removes the hard edge produced by memcpy.
The trouble with these threads is they straddle a gaping chasm between complete delusion and amazing trolling.
Just imagine if stuff was like this though and computing were more 'analogue' -- each of those 15 layers of JavaScript transpiling just degraded the quality of the end result slightly.
Really weird things happen with half-knowledge and fixed ideas built on it. If you "know" it'll sound different, it will sound different to you. Combine that with actual, unrelated variations and maybe a bug that fits the pattern at some point...
This shit always gets me. I don't care how much you spend on electrostatic speakers, granite slabs, acoustic treatments, and magic speaker wire — your room will never, ever sound as good as a decent pair of headphones.
That said there absolutely is a case for high-bitrate, losslessly-compressed, DRM-free, watermark-free audio as the standard, and that is sampling and remixing. Slowing down a 320 kbps mp3 by just 50% sounds like shit.
Some of the people concerned about room response are listening to surround-sound classical recordings. In works that involve a spatial element -- for example an orchestra in front and players that move around the hall (or are embedded within the audience), creating a 360° soundstage -- headphones just don't preserve that as well as actual speakers.
Headphones with a proper HRTF preserve that perfectly well, arguably better than any actual speaker setup ever can. The typical demo for this functionality is https://www.youtube.com/watch?v=IUDTlvagjJA
Recordings have to be specially made and processed for that. There are, however, many older recordings made without such technology that are expected to be played on speakers in 5.0.
You can apply the same virtually as well: simulate a raytraced environment with 5.0 speakers and furniture, simulate an HRTF, and generate the resulting 2.0 audio.
It's possible to do this in a way that's not in any way distinguishable from a real 5.0 setup.
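As a toy illustration of the idea (emphatically not a real HRTF, just distance-based delay and attenuation; all names and positions below are invented for the example), one can render a virtual speaker to a pair of "ears" in a few lines of numpy:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def virtual_speaker(mono, fs, speaker_xy,
                    ears_xy=((-0.09, 0.0), (0.09, 0.0))):
    """Render a mono speaker feed at position speaker_xy (metres) to a
    stereo pair of 'ears', using only distance-based delay and 1/r
    attenuation as a crude stand-in for a real HRTF/room simulation."""
    out = []
    for ear in ears_xy:
        d = float(np.hypot(speaker_xy[0] - ear[0], speaker_xy[1] - ear[1]))
        delay = int(round(d / SPEED_OF_SOUND * fs))   # whole samples
        gain = 1.0 / max(d, 0.1)                      # clamp near-field gain
        chan = np.zeros(len(mono) + delay)
        chan[delay:] = gain * mono
        out.append(chan)
    n = max(len(c) for c in out)
    return np.stack([np.pad(c, (0, n - len(c))) for c in out])  # shape (2, n)

# A speaker 2 m away, 45 degrees to the left, hits the left ear
# earlier and louder than the right ear.
fs = 48_000
mono = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
stereo = virtual_speaker(mono, fs, (-1.414, 1.414))
```

A real renderer adds the actual HRTF (per-ear, frequency-dependent filtering) plus room reflections; the interaural time and level differences modelled here are only the crudest localization cues.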
I don’t think it is realistic to expect an ordinary home listener of some old SACD 5.0 recording to carry out some elaborate simulation involving the specific furniture in his home, just to listen to something on headphones.
The user themselves, no, but if you have some Apple HomePods and AirPods Max, they can build the same model and actually do that (it's part of their spatial audio).
Dolby provides a similar system, but with a generic room simulation, as do several others.
I support getting the sampling rate and quality up. To what level? Idk.
Have you ever been in a room with a DJ playing low quality music on a decent sound system? It's awful, grating. Now at least your everyday aspiring DJ has access to higher bitrate tunes and doesn't need to blast low quality junk out of the PA system.
While I can understand that Bluetooth limits the data transmission rate, it becomes even more puzzling that the AirPods Max don't support data transmission via the Lightning cable. With that they could fully support lossless and high-res audio in full quality. How can Apple release such expensive (some people even used the word "overpriced") headphones without that feature, especially with the new tiers of Apple Music coming?
For what it’s worth, I’ve got the Bose QC35 and regardless of whatever the battery claims are, I often get multiple working days out of them.
They also have a pretty light clamping force compared to most over ear headphones I have or have used, so even with glasses they’re quite comfortable to wear all day.
And if the batteries do run out, they still work as headphones (sans ANC) with a cable.
The QC20 kinda seem like the worst of all worlds for me. Wired, so you've got cables to catch on things; battery-dependent; in-ear…
YMMV as they say. My work laptop has pretty bad background noise on the headphone out, audible even with a cheap audio-conferencing setup while doing nothing. If I plug in an external display and start giving the CPU a workout, it becomes torture.
Bluetooth on the same computer is silent, and the audio quality seems limited by the headphones (Sony WH-1000 with LDAC). I know some computers and phones have good analog audio quality, but the point is it's not universal, especially when using an "enterprise" computer which you can't choose.
I'm impressed you can wear earbuds all day. I can't even wear headphones all day, let alone earbuds. Earbuds for me max out at about one hour at a time before they get too uncomfortable.
I find it depends. I wear glasses and have fairly big ears [0], so headphones become painful fairly quickly.
Some earbuds are uncomfortable. I can't stand the standard iPhone ones, and the Samsung ones' wire makes a lot of noise when it touches anything ever so slightly.
I have a pair of in-ear monitors with foam tips (like the foam earplugs you buy for studying) and they're very, very comfortable. I can wear them for hours on end. The tips come in several sizes, so it's important to pick the right one.
The only issue is that they're not easy to put in: I have to squish the foam, insert them, then hold them in place a bit until the foam expands. In an office setting, if I have to do this often it gets old really quickly. Noise isolation is great, too, so I can't leave them in if I have to talk to someone.
---
[0] I've tried "around ear" phones, but they don't go around enough, and end up pressing on the outer ear, which, with the glasses, gets painful.
I have the 425s, which is the model below, but only the insides are different, so I can answer your question.
Sound isolation is very good, but it depends on the tips being the right size for your ear canal. Some only swear by the silicone ones, others by the foams. Note that it's a lot like wearing earplugs, so you'll hear your steps if you have harder soles, etc.
Personally, I've always used the foams, I found them more comfortable and good enough for my needs. The silicone ones wouldn't hold well or were too uncomfortable.
However, when it was time to change them, I bought a set of Comply tips and isolation was much better. They're a bit longer, so you shouldn't buy too many before you check whether they bother you. They're also somewhat more rigid, so you have to hold them in a bit longer while they expand, which makes them less practical if you need to take them out and put them back in often.
Ha, yeah, my intras are actually Shures, too, the 425.
Bought mine around 2010-2011, I remember it being a pretty big purchase as a student. Probably one of the best investments I've ever made, I still absolutely love using them.
Thanks will check these out. Maybe the ANC will be good enough?
> ANC is certainly not Bose/Sony quality
I wish Bose would just make a USB-C powered version of their QC20. I don't understand why I have to rely on battery powered ANC when it's already a wired device.
I wouldn't have high hopes though. Getting ANC to work in consumer earbuds (not earphones) is a tough nut to crack. Even big players like Huawei, Sennheiser and Samsung struggle with it. They seem to work only in a narrow set of cases, like being next to an AC unit or a food blender, but fall down on day-to-day sounds like cars, trains, crowds of people talking, dishes and cutlery clinking, doors slamming, etc. Sometimes I feel like they even amplify these sounds instead.
I think the real question is why the AirPods Max received tons of praise for their unmatched audio quality when they were released, yet are now suddenly dismissed as second tier because they don't support lossless.
I mean, surely all those audiophile testers were aware that they are listening to compressed music? Or is this a recent realisation?
The praise that I heard was about the quality of the transparency mode and the integration with Apple's ecosystem. On the reviews that I saw the reviewers were very careful not to claim unmatched audio quality.
To be honest, I don't think this is a big limitation in practice. The AirPods Max do sound excellent, and I think "lossless" audio is a bit overrated; a high-quality AAC file can sound quite amazing. So personally, I'm more interested in the spatial audio, which is fully supported. But considering that some people think lossless audio is important, it's hard to understand why Apple's just-released flagship headphones don't support it.
Yes, for wireless mode bluetooth is probably the limitation. They could have supported data via lightning cable, if they had wanted it, but currently only analog audio is supported via a custom lightning cable.
That custom lightning cable is 3.5mm on the other end, so it’s not surprising that it’s analog-only. What is apparently surprising to some people is that the headphones re-digitize the analog signal before processing the signal (and eventually converting back to analog for the actual headphone drivers). Personally, that isn’t too surprising to me for noise-cancelling headphones, although I have no idea if it’s the case with other noise-canceling headphones that accept analog input.
Some wireless headphones work passively over 3.5mm without power. In my experience, the MDR-1ABT works great, maybe because it's just the BT version of the MDR-1A, but the WH-1000XM3 and Beoplay H9i (both NC) work badly, maybe because they're not optimized for it. I wonder whether there are any NC headphones that work well both ways.
Not when the headphones re-digitize the analog signal, which I suspect is how they are designed to support the signal processing and noise cancellation features.
But as soon as signal processing and noise cancellation enter the picture, there will be signal loss. So even if lossless could be transmitted wirelessly, there will be loss due to signal processing.
At most, what could be gained is one less AD/DA conversion to go through.
Then the question presents itself: why not buy wired analog headphones in the first place?
I think the audiophile testers are usually more concerned about the soundstage than about the compression, considering the market the AirPods Max are aimed at, particularly for Bluetooth headphones.
On top of that, they own the full stack; they can (as they have in the past) use a 100% proprietary approach to achieve this. They don't have to follow common standards.
Apparently lossless streaming cannot be supported on the newest bluetooth spec, and will probably require some kind of wired headphones for the near future.
If this is accurate then Bluetooth continues to disappoint.
Bluetooth reminds me a lot of the F-35: Designed to do everything, does none of it particularly well, hard to build/maintain, and frankly would have been better off as three or four competing standards that did their "thing" well and nothing else.
Might be time to retire it, do a clean sheet audio-only standard, that is easy enough to implement/re-implement with actual tests for all the functionality.
> Bluetooth reminds me a lot of the F-35: Designed to do everything, does none of it particularly well, hard to build/maintain, and frankly would have been better off as three or four competing standards that did their "thing" well and nothing else.
Hard disagree. Bluetooth has an important disadvantage, in that its quality is bad. But it's also got an important advantage: it works, at all.
Basically anything made from early 00s onwards will be able to connect to a modern pair of Bluetooth headphones. Interop is the selling feature.
That's true of 'classic' bluetooth, but not of LE. LE is legitimately good. Source: Implemented a couple of LE devices. There's no shared heritage between LE and Classic, just the branding in common.
Oh yes, LE is a completely different can of worms (and a horrible one at that). I was referring to classic Bluetooth, since most LE headsets also support it as a connectivity mode.
btw I hope I didn't come across as snarky. I agree BLE would be better with more bandwidth. I think part of the reason Bluetooth is the F-35 of protocols is that it tried to be everything to everyone, and that's a hard line to draw. IMO audio should be in the BLE camp, but beyond audio I think WiFi is a better fit. JMHO.
I don't think there are many devices that can create ad-hoc connections between them using WiFI. Maybe they should, but are there many devices that can do this?
E.g. connecting my Windows laptop to my Android phone directly through WiFi, in a very simple manner.
I'm mainly using Sony and Jabra gear with an iPhone; when I turn them on they reconnect, no issues, except some headsets tend to cut out when the line between phone and headset runs more than a couple of inches below the surface of my torso.
I would have agreed with you prior to Bluetooth 4.0, the LE spec.
LE is a cleanroom specification developed by Nokia and delivered wholesale to the Bluetooth SIG in 2010. It's much more limited in scope, limited in bandwidth and designed to fill a niche around low-power, low-bandwidth devices instead of F-35'ing with WiFi-type capabilities.
It'll get faster in time - each new revision brings with it a faster PHY without compromising on power consumption. It's up to about 2Mbits/sec now, as of 5.0. It's not quite up to where it needs to be to support streaming high-fidelity lossless audio, but it'll be there shortly. It's the right tool for the job - for once - we just need to let it mature a bit.
[edit] Double-checking how AirPods connect, but I'm almost confident it's analogous to LE Audio spec in 5.1
This is purely a spec issue. At first there was the abysmal Headset Profile with audio at 8kHz (SCO Codec). Maybe that was the quality of telephony generations ago, but I find it barely intelligible (and definitely not useable).
They then followed it up with the Hands Free Profile using a version of the SBC codec at 16kHz. They call it „high quality“ but it still sucks and is barely usable (that companies who are even pushing proprietary codecs for Bluetooth audio haven’t really attempted to fix this [FastStream is a one-way solution] is mind boggling to me).
Maybe, just maybe, LE Audio will not actually suck for audio recording. Maybe.
Yes, even the 5.0 LE PHY is 2Mbits/sec and I believe that's shared between upstream and downstream. We're talking a total of 175kB/sec, split up and down [1]. I doubt that's sufficient for what you're suggesting.
The claim that the F-35 is inferior to single role aircraft like the F-22 is also unfounded. None of us knows the specifics about the two airplanes but we do know that the F-35 has superior weapon systems, durability, communications, and spatial awareness.
There are planned upgrades to make the F-22 compatible with the max off boresight capability of the AIM-9X, with its future LOAL capabilities, and there are plans to coat it with more durable radar reflective "paint". The F-35 has all of those today.
>But the most important proof that the F-35 is a good airplane is that there are so many customers lining up to buy it.
You lost me there in an otherwise fine comment. The reason is at least as much that the big bully in the schoolyard wants you to buy his product and most don't dare not to. In many countries that did buy it, the other bidders complained that the contract was written specifically so that no one else could win, so they couldn't even bid properly (and lots of evidence says they're absolutely correct). That doesn't happen if the product is as good as you claim; it would have just won fairly. Personally I haven't heard of a single fair win, but I don't follow it that closely.
The theory that allied countries purchased the F-35 because they were bullied by the USA into doing so doesn't fit the fact that allied countries refuse to purchase American airplanes all the time.
France makes its own combat airplanes, some of which like the Mirage 2000 and the Rafale are in operation.
Sweden chose to make the Gripen, the UK, Germany, Spain, and Italy chose to make the Eurofighter Typhoon, and Japan, well that's a weaker argument because their native fighter is based on an American jet fighter.
And those were the manufacturers that were shut out. Besides, I didn't say the US forced anyone. Being scared and being forced are close but not identical.
Edit:
Having some knowledge of how the US operates in Europe, I'm not surprised. I've personally seen lots of exercises in Europe where foreign nations participate and behave very well with the host. I have only ever seen the US Navy and US Army just show up using whatever radio frequencies they damn well feel like, unlike every other nation. Everyone else reserves frequencies beforehand; the US, not so much, no matter what they might interfere with. The US is a bully that behaves however it damn well pleases.
The AirPods Max digitize wired input with a 48kHz sample rate for processing, so even using the ($29!) cable won't deliver any sort of benefit from this.
"Lossless" just means there is no (lossy, perceptual) compression involved. You could absolutely have a 48 kHz lossless signal, or even 44.1 kHz if you were dealing with a CD rip, and it would likely still measure better than the lossy equivalent audio sent over AAC or aptX via Bluetooth.
I'll skip the whole debate about how good perceptual lossy encodings actually are these days and whether any person could reasonably expect to hear that measured difference, but just wanted to clarify that the sample rate of the headphones' built-in ADC shouldn't be conflated with the lossless codec.
Well, yeah. There are two parts to this, the lossless compression and the "HiFi" sample rate (it goes up to 24-bit 192kHz). I was just pointing out that you don't get the high sample rate even over analog.
I haven't seen anywhere whether AirPods can do 192kHz AAC over Bluetooth, which in my opinion would be more interesting. I assume not if all of the DSP stuff is done at 48kHz.
That would be inconvenient. However, there are a lot of portable-player-with-regular-wired-headphones options which might suit you. One even comes in the form of a smartphone.
Sure, but take it a bit less literally and you have wireless vs. wired headphones. Same thing: wired headphones will just play anything you throw at them. Meanwhile wireless ones cost more, have more limited bandwidth, sometimes require an app for settings, and are built to fail after a few years thanks to proprietary batteries.
We have the tech to solve all these issues. But that wouldn't be as profitable.
I'm not saying it's a cartel. But you can't tell me manufacturers stopped making batteries swappable in notebooks, phones, headphones and many other devices for much other than planned obsolescence. Of course they always make up some other excuse, but it's very obvious e.g. the Surface Laptop wasn't made impossible to repair for technical reasons.
It's plain as day why certain design decisions are made, and calling it a tradeoff might not be wrong, but it's clearly too kind to these practices.
I'm unfamiliar with the Surface, but the unusual shape of the MacBook batteries is intentional, so that they can fill almost all available room. This does make it hard to also make them user-serviceable.
No, it doesn't. The shape of the battery has absolutely nothing to do with how hard it is to swap. The amount of glue, restrictions on replacement parts and special screws does.
Is it? Companies have removed the wired option, forgetting that current wireless audio tech is abysmal in many aspects - latency, bandwidth, codecs, Bluetooth being an awful spec. Yet in their book it was comparable enough to replace wired completely.
Does Bluetooth support binary transfer? Why can't it just be a dumb wireless data-transfer protocol? Instead you have profiles and lossy compression. Turn on a mic and the quality drops to phone quality from 30 years ago.
Source? The only option that I'm aware of that exists in Bluetooth spec is an optional add-on that basically initiates an 802.11 (WiFi) ad-hoc link for faster transfer. Not really suitable for audio.
Max bandwidth for regular Bluetooth is around 3 Mbit/s, to my knowledge. Bluetooth LE is lower at 2 Mbit/s.
Coincidentally (or not?), that's in the same ballpark as uncompressed CDDA, which is about 1.4 Mbit/s. I suspect the real-world data rate is lower, though.
In any event, 300 kbit/s is more than sufficient for transparent compression of audio, and if the source is lossless, you shouldn't have recompression artifacts. Doing so in real time does have costs (low-latency compression uses more CPU time and ends up with higher bitrates than when you allow more latency).
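The arithmetic behind that, sketched in Python (the protocol-overhead factor and FLAC compression ratio are rough assumptions, not measurements):

```python
# Raw CDDA: 44.1 kHz * 16-bit * 2 channels, in kbit/s.
cdda_kbps = 44_100 * 16 * 2 / 1000       # 1411.2 kbit/s of raw stereo PCM

# Typical lossless compression lands around 50-70% of raw; assume 60%.
flac_kbps = cdda_kbps * 0.6              # ~847 kbit/s

# Classic Bluetooth tops out near 3 Mbit/s on paper; assume ~70% is
# usable after protocol overhead (a guess, not a spec figure).
bt_classic_kbps = 3_000 * 0.7            # ~2100 kbit/s

fits_raw = cdda_kbps < bt_classic_kbps   # even raw PCM fits, on paper
```

So in theory the link budget is there; the practical problems are interference, shared/duty-cycled radios, and the codecs the profiles actually mandate.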
There are „just data“ profiles. The issue is nobody has actually thought up a protocol for it.
A funny thing I‘d do if I had more time and energy to dedicate to the topic of Bluetooth telephony (HSP and HFP are garbage for arbitrary reasons) would be opening a network and doing reasonable quality telephony over it to demonstrate it‘s possible.
I think the tradeoff is often quality:latency. The higher the quality, the bigger the buffer, and the longer the latency?
Anecdotally, I noticed that my Bluetooth headphones had latency so bad they were unusable for gaming (CS:GO, Valorant). Switching the codec to aptX reduced latency noticeably, but also dramatically reduced quality to somewhere similar to a phone call (flat, noisy).
Is Apple really concerned with the latency, though?
Playing audio on a HomePod used to have huge latency, we're talking about 2 seconds of lag, and it still sold well. Sure, they've improved this with AirPlay 2, but nonetheless... I'm not sure how big a concern latency is for music streaming (as opposed to video, calls or games).
No. True lossless audio streaming could work fine on BT; even the newest LE has excess bandwidth to support it (BT 5: 2 Mbit/s). What it can't support is losslessly streaming ultrasonic audio intended for bats (and batshit-crazy audiophools).
Why does the lossless playback need to be streaming though? This is the fundamental mistake in the analysis of why the AirPods max lack the ability to support this service offering.
The headphones should have gigs of buffer and be able to manage lossless with a great UX to support this.
People are already used to buffer waits for streaming video and this is already a niche use case.
It isn’t ideal but this isn’t for average playback.
Assuming even mediocre speeds, an average album in ALAC is likely around 350 MB. That's about 33 minutes at 1.4 Mbit/s.
They could start listening around the 17 minute mark and remain uninterrupted.
I suspect most APM owners would happily load the music to the headphones in advance to perform this playback. This seems so obvious to me, that I suspect it is held back by licensing issues more than cost reasons.
The playing device would need to do a good job with the interface but this is not a big ask—-people have been buffering for media they want since RealAudio.
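That transfer math, as a quick Python sanity check (the album runtime here is an assumption; whether any head start is needed at all depends on it, since with typical runtimes the link outruns playback):

```python
# Numbers from the comment: a 350 MB ALAC album over a ~1.4 Mbit/s link.
album_mb = 350
link_mbps = 1.4
transfer_min = album_mb * 8 / link_mbps / 60          # ~33.3 minutes

# Playback can start once the remaining download is guaranteed to finish
# before the music runs out. album_min is an assumed runtime; for any
# album longer than the transfer time, no waiting is needed at all.
album_min = 45
head_start_min = max(0.0, transfer_min - album_min)   # 0.0 for this runtime
```

In other words, a resident buffer only needs to absorb the gap between transfer time and runtime, which for most albums at these rates is zero.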
AirPods are headphones, not a music player. A device that you have to "load music" onto isn't a set of headphones anymore -- and the features you're describing would add significantly to the cost of an already expensive device.
Apple took our headphone jacks, then gave us lossless audio. How generous. My 6s is starting to show its wear, unfortunately.
I don't expect it, but I truly hope the next iPhone has a headphone jack. People want AirPods whether or not their phones have headphone jacks; taking the jack away in the first place was a move born of insecurity. Have some faith in your products.
On the plus side, I often get to pick the music in my friends' cars.
I have a couple of those, and they work fine. The only downside is you can't use wired charging and wired headphones at the same time unless you use a third-party splitter, none of which have great reviews.
The MagSafe charger kind of solves this, since it's at least easy to pick up with the charger attached, but MagSafe chargers are more expensive than Lightning cables, and the official one has a very short cord.
In my dream world, the iPhone would have two Lightning (or USB-C) ports, but I don't expect that to ever happen.
Even if the phones still had a headphone jack, how good is the tiny integrated DAC that would be driving it? I really doubt the difference would be perceivable.
Apple products consistently use excellent built-in DACs. Search "audio" on the following page for detailed analysis and measurements of several different Apple devices. https://www.kenrockwell.com/apple/index.htm
> People want airpods whether or not their phones have headphone jacks
Are you suggesting that Apple removed the jack in order to sell AirPods? Seems a bit cynical and doesn't even make sense. I think they removed it for engineering and design reasons. If you need it back there's an official adapter that costs just a few dollars.
Are people surprised by the idea that a company might make design decisions based on their entire product catalog and long-term marketing strategies?
In my mind this isn't even a conspiracy theory, it's just what you'd expect from a competent company. If nobody in the decision-making process to remove the headphone jack asked how it would affect long-term AirPod sales, then that just seems like a pretty big oversight on their part, doesn't it?
Spotify bought podcasts so it could tie them to a DRM-encumbered platform with ads, Facebook bought Oculus so it could monetize the data, consoles remove disk drives in part to drive digital purchases and cut down on game reselling, iMessage is iPhone-exclusive because it makes it harder to switch to Android, and Apple cares about influencing long-term consumer trends in the headphone market. I don't necessarily agree that acknowledging this stuff counts as being cynical.
I'm not sure what you mean, that doesn't seem like a contradiction to me. Apple sells an adapter that needs to be separately purchased (which is inconvenient), they no longer provide that adapter for new phones. They provided it initially because the market demanded a stopgap solution. Once the market stopped demanding that solution, Apple stopped providing it.
This is how basically every business works. Facebook didn't merge Oculus accounts with Facebook accounts the same year that they bought the company. Epic has been outright giving away games to chip away at Steam's market share, but they're not planning to do that indefinitely. They have plans about what they want the PC market to look like in the future. Companies are capable of thinking long-term. Apple is definitely capable of thinking long-term; it's a business run by very smart people.
Do you believe that the iPhone has not driven Bluetooth adoption at all? I think that would be a pretty bold claim to make. Apple has positioned the lack of a headphone jack as a signature mark of a premium phone. Other phone manufacturers have jumped on board with that consumer perception, which has driven investment into making more Bluetooth compatible devices, which in turn makes AirPods more convenient and easier to use with a wider variety of devices. Because wireless audio technology is still evolving (unlike static wired standards), Apple now has an opportunity to make a set of devices that work best with their own products, encouraging more consumer lock-in to the Apple ecosystem. This also positions Apple to differentiate their headphones from competitors in ways that would be difficult to do with an analog connection (offering seamless pairing, higher bitrates, etc...).
It's not like Apple doesn't have a history with this, they have openly (proudly) removed features in order to push industries in preferred directions. See CDs over floppies, USB/Thunderbolt over CDs, HTML5 over Flash, etc... That's not a conspiracy, that's literally Apple's stated reason for those changes. And of course Apple thinks about how changes in those industries and markets will affect the products they sell. They'd be foolish not to.
What is it that makes you think that wireless headphones are different than the last 3 or 4 times that Apple has done this? This just seems like very obvious business practice to me; almost every company tries to manipulate markets and demand to some degree.
Perhaps cynical, perhaps a baseless conspiracy. I'll own up to that. But I don't think it's such a massive stretch, nor would I put it past Apple to do such a thing.
I don’t think it’s too cynical. In light of recent emails released in the epic lawsuit, it’s clear Apple makes decisions based on profit over functionality. (iMessage for android). People want to believe they’re altruistic but they are a business after all.
I think the fact that they sell a simple and compact adapter which restores the functionality for less than ten dollars makes me think it cannot be to sell Bluetooth headsets.
HN rules state that I should take the most charitable reading, however, my first thought after reading your comment was "Is this satire?"
Restoring previous functionality for "less than ten dollars" would certainly fit into a nefarious scheme for selling Bluetooth headsets - those who do not want bluetooth headsets are still forced to buy something: buying a less expensive item is still buying. Self-cannibalization is not a new trope.
> those who do not want bluetooth headsets are still forced to buy something
But they even included them in the box for several years - they didn't charge extra for them. They wouldn't do that if they were trying to force people to buy Bluetooth.
Now years later and after most people moved to Bluetooth voluntarily they're not needed and would be wasteful to include.
> They wouldn't do that if they were trying to force people to buy Bluetooth.
Apple removed a headphone jack and provided a stopgap solution in the form of a dongle that was easy to lose and that provided a noticeably worse experience than a normal jack (the inability to charge and listen at the same time).
Once consumers started to come into the Bluetooth ecosystem, Apple raised the gates behind them and stopped providing that stopgap solution. This is pretty normal. For example, if you want to change your privacy policy, make an optional change at first, and then shift it to be opt-out, and then finally make it mandatory.
When you introduce a new policy or change to your products, use the optional nature of that change to shut down complaints. Even though they're not strictly required to, at least some people will migrate (after all, some people were buying Bluetooth headphones before the removal of the jack), and you can use that as evidence to support introducing further changes that make it harder and more annoying not to migrate. Eventually, the numbers will look good enough that you can stop pretending it's a choice at all.
And how does that work out in practice? Well, Apple now controls ~70% of the totally wireless headphone market, and AirPods are arguably one of their most important product categories.
> they're not needed and would be wasteful to include
This is a level of charity towards business press releases that I just don't really know how to connect with. I don't mean this as a gotcha question or anything, but do you believe Apple when they say the reason new iPhones are shipping without chargers is to help the environment? I thought we all took it as a given that all of the companies saying stuff like this were full of crap.
>I think the fact that they sell a simple and compact adapter which restores the functionality
It does not restore the functionality. You cannot use the wired headphones via the adapter, and simultaneously charge the phone. The adapter was sold on the tagline of 'courage' by instilling the myth that 3.5mm jacks were verboten in the ecosystem. To suggest it was an altruistic act, is absurd. It was a calculated bet, like everything else they do, e.g. iPhones have yet to feature USB-C.
The immediate explosion of the wireless market after the removal of the jack, and its timing alongside the concurrent announcement of AirPods - that is your answer.
But if most will just take it to an Apple store to fix it (because they can’t identify a software botch from a hardware botch), they could move the port internally so only Apple authorized access to the port would be provided without voiding the warranty. Now where does the adapter go?
You absolutely can - the iPhone actually has a demo of it you can try. Move your head around and the difference is extremely clear; it's not some audiophile thing.
But... no one is talking about spatial audio. This entire thread is about lossless quality. Your comment doesn't even make sense out of context, the AirPods Pro and Max already support spatial audio.
> But... no one is talking about spatial audio. This entire thread is about lossless quality.
If you read past the title to at least as far as the first sentence, it's a wider announcement.
> Apple has announced that it's adding 'Lossless' and 'Hi-Resolution Lossless' streaming options to Apple Music in June 2021 for no extra charge, as well as offering Dolby Atmos 'Spatial Audio' 3D music, too.
Having wireless headphones:
Cons:
- Battery to charge
- Battery that is reduced after a few years
- Pairing with devices issues
- Lower sound quality due to Bluetooth recompression
Pros:
- The cable doesn't get snagged when the person sitting next to you on the subway gets up, ripping them out of your ears
- Easy to connect to a phone, tablet, or TV, and you can move around the house without the device
There are perfectly valid reasons to have wireless headphones.
Even around the house wireless can be an advantage. I don't mind being tethered when I'm at my desk, but I'm not always at my desk and wires running to a phone can be a real nuisance, getting snagged on drawer handles and tempting pet chewing.
I feel like most of the "must be wired" brigade check two boxes
1. Too stubborn to move on from their favorite pair of enthusiast headphones
2. Haven't actually _used_ modern wireless headphones much, if at all.
I love a pair of good Sennheisers, or Audeze, or whatever the new $2000 hotness is these days when I'm sitting at my desk, but when I'm on the go? Give me a pair of bluetooth headphones 100% of the time. The convenience is beyond worth it IMO.
I, uh, just own both types? Actually 90% of the time the reason I use wireless buds is because I want to wear just one bud without stuff dangling, like when I'm cooking or walking my dog and I want to be aware of my surroundings.
You can't seriously list battery wear and charging and leave out the biggest annoyances with wired earphones: the cable gets tangled, and on earphones the cable is the weakest link, it's the first thing that breaks the majority of the time.
Listening to music vs participating on conference calls are separate use cases. You do not need high bandwidth Bluetooth or some other wireless standard for conference call audio.
When I'm cycling or running I cannot handle having a cable going around. Bluetooth does the job anyway.
If you care about audio quality, then your requirements differ.
Apple loves to make something new/cool and then make it obsolete a year later. Honestly, frustrating as an Apple customer. Just when I jump to your new product you make it useless by changing something. Tech should be designed to last years, not months.
How is not supporting "24-bit/192kHz" making a headphone obsolete? I would be surprised if you can even hear a difference on any AirPods compared to "24-bit/48kHz".
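The numbers back this skepticism up. Here's a quick sketch using the standard Nyquist relationship and the usual back-of-the-envelope PCM dynamic-range approximation (nothing here is specific to AirPods):

```python
# Rough numbers behind the "can you even hear it?" question.

# Nyquist: a given sample rate can only capture frequencies up to half of it.
def nyquist_hz(sample_rate_hz):
    return sample_rate_hz / 2

# Standard approximation for the dynamic range of linear PCM:
# about 6.02 dB per bit, plus 1.76 dB.
def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

print(nyquist_hz(48_000))     # 24000.0 Hz - already past the ~20 kHz limit of human hearing
print(nyquist_hz(192_000))    # 96000.0 Hz - four times higher still
print(dynamic_range_db(16))   # ~98 dB
print(dynamic_range_db(24))   # ~146 dB - far more range than any playback chain needs
```

In other words, everything above 48kHz/16-bit buys headroom that matters in production, not in playback.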
Any serious blind listening tests for 256kbps AAC on specialized sites such as Hydrogenaudio puts that codec at the border of perceptibility compared to lossless.
Everything above is more deeply rooted in belief than it is in reality - or as the old adage goes: "Music fans use their equipment to listen to good music. Audiophiles use good music to listen to their equipment."
There are people in the community who claim to hear differences between digital cables using error correction as part of their protocol (HDMI).
Which Apple devices are most notable as part of the story of next-year deprecation? And won't compressed streaming still be the norm for almost everyone in the world for a very long time?
What do you mean? In 20 years they changed the laptop power adapter twice and the iPod/phone adapter once. The iPad is the worst (2 in ~8 years) since it arrived late in the iPod connector's life and adopted usb-c early. In the iPad's case it aligned with similarly positioned products and was generally a good upgrade.
Compare to other laptops which never had a standard adapter, phones which were also not standard until they moved 3 times in ~8-10 years from barrel jacks to mini-usb to micro-usb and now usb-c.
I don't have a source but owned MBPs and Airs from the first up until USB-C, and I had more than 3 different power adapters including USB-C for them. But you may be right and I remember differently.
I still have not forgotten the A1234 dual dock that became useless when they released iPhone 4. Or the A1221 headset that barely worked at all and needed a separate charger after iPhone 4. Apple has a history of making accessories that are useless after 1 generation (after 1 year).
In this case, though, it's more like protecting the user from streaming at a higher bitrate than anyone could hear a difference in on that hardware anyway - which will help their battery life. We're talking about things like ear pods here, not a hi-fi stereo system.
They aren’t changing anything, your AirPods will still be using the same AAC codec it did when you first bought them.
Taking a step back from watching their every move or keynote is one way to keep being happy with your device, and you’ll probably find it does last years.
My Sony XM3s support LDAC, which is near-lossless, and they were released a couple of years ago. No matter how you put it, I can't justify a so-called $600 "premium" wireless headphone not even attempting to support something close to lossless. I really hope there's something Apple can do about this via software; otherwise, as an audiophile, this is just full of compromises - from requiring a case just to charge it, to the easy-to-slip-from-fingers design, to the smaller driver size (40mm), and now to not supporting lossless formats.
LDAC isn't lossless. The 'Hi-Resolution' ALAC files would all be lossy over it - and honestly so would anything else above 44.1kHz without a near-flawless Bluetooth connection.
This is a really baffling decision from Apple - normally they're very good about having tightly integrated ecosystems where everything just 'works' with everything else. That's kinda the entire value proposition in my eyes.
> normally they're very good about having tightly integrated ecosystems where everything just 'works' with everything else
As a long time Apple user (since the 90's), I feel like this hasn't been true in a while. The number one exhibit is the dongle-hell they've been putting their customers through for the last 5 years. I have a watch, phone, headphones, a computer, and a tablet by Apple. All were made in the last two years. I have to carry a handful of different cables and power adapters if I want to charge them all. It's a huge pain in the ass and is FAR from the definition of "just works".
They put out a $130 (!) charger that charges both phones (or anything else on Qi, including some of their wireless headphones) and watches[0], yet that uses their proprietary Lightning cable, not USB-C, so if you bring a tablet/laptop, you're still stuck needing to bring a second cable and possibly a second charger with you.
Meanwhile, over here in non-Apple land, I bring a single high wattage USB-C charger with me when I travel, which is sufficient to charge my laptop, phone, and headphones. Then I have tiny 1' USB-A bridge cables that I use to charge my other devices through my laptop (Kindle, battery pack, smartwatch). All pretty simple, all dirt cheap cables, no dongles.
Which is why Apple captures the most "profit share" and goes to extreme lengths to maintain "Apple families"[1] to dissuade compatibility/inter-op as much as possible.
1. As evidenced by Apple's internal arguments against iMessage on Android during the Epic hearings
I carry a dual usbc power brick, a usbc->usbc cable and a usbc->lightning. This covers the MBP, iPad Pro, iPhone, APP. The watch is the odd one out, so a special usbc->watch cable is required. This is true regardless of any other solution.
Now if we go backwards to the old MagSafe era I had to carry a power solution just for the MBP alone. If I wanted to use it as the power hub (could also do the same above) I could carry a usba->lightning (assuming all the other devices were still lightning), and a usba->watch. But what normally happened is I also brought additional charge bricks.
I just don't see how 5 years ago was radically better than today. A couple devices still have lightning on one end /shrug. A cable with usbc on the other mostly hides that for now.
I wonder whether someone who resents the lack of lossless audio in wireless, noise-cancelling headphones shouldn‘t rather consider buying an analogue, wired pair instead?
Well, I don't think any Bluetooth headphones would work with that.
I don't know how it works with headphones, but for Dolby Atmos you need at least 7 good speakers to make it work well in a movie:
* Front left
* Front right
* Center
* Surround pair
* Back pair
You also need a subwoofer, and if you're going to implement a perfect setup, you still need 4 ceiling speakers to get the full effect.
This cost of entry is the reason I still haven't put it in my house.
You only have two ears. If you can isolate them, as in headphones, you don't need more than two speakers.
The reason Dolby Atmos requires so many speakers is that each speaker is heard by both ears, like in real life, as a result you need a setup closer to real life. In addition, in theaters, people seat all over the place, so you need even more speakers to widen the "sweet spot".
Strangely enough, Apple's most advanced ear pods/headphones and the like change the audio you hear as you move your head. Kind of gimmicky, but they do technically have a way to render spatial audio even though they only have two speakers.
Serious question: You only have 2 ears so theoretically headphones should be able to produce the same experience as 7 speakers in a room. I imagine that is a hard problem, but is my understanding correct? Anyone know how hard this is?
> you need at least 7 good speakers to make it work well in a movie
You don't - they use head-related transfer function filters (the Fourier transform of head-related impulse response) to make it work from a single pair of speakers.
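A minimal sketch of that idea: convolve a mono source with a left and a right head-related impulse response (HRIR) to get a binaural signal. The HRIR values below are made up purely for illustration - real HRIRs are measured responses hundreds of samples long.

```python
# Toy binaural rendering via HRIR convolution (illustrative numbers only).

def convolve(signal, ir):
    """Direct-form convolution of a mono signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def render_binaural(mono, hrir_left, hrir_right):
    """Place a mono source at the position the HRIR pair was measured for."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Hypothetical HRIRs for a source to the listener's right:
# the right ear hears the sound earlier and louder than the left.
hrir_r = [0.9, 0.2]
hrir_l = [0.0, 0.0, 0.5, 0.1]  # delayed (2 samples) and attenuated

left, right = render_binaural([1.0, 0.0, 0.0], hrir_l, hrir_r)
```

The head-tracking headphones mentioned above essentially re-select the HRIR pair on the fly as your head moves, so the source appears fixed in the room.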
There doesn't seem to be enough bandwidth, not even with BT 5.0's 2 Mbps LE 2M which just barely ekes out 1400 kbps after accounting for protocol overhead, just below CD audio's 1411 kbps data rate for 2-channel 16 bit 44.1 kHz LPCM.
Unless ALAC can consistently keep the data rate below 1400 kbps, I don't see how it's possible to have lossless audio over today's BT standards. And keep in mind that CD-quality is the lowest tier of Apple's new offering.
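The arithmetic is easy to check. Note the ~1400 kbps effective LE 2M figure is an estimate after protocol overhead, not a spec number:

```python
# Back-of-the-envelope check of the bandwidth argument.

def pcm_bitrate(sample_rate_hz, bit_depth, channels):
    """Raw (uncompressed) LPCM data rate in bits per second."""
    return sample_rate_hz * bit_depth * channels

cd = pcm_bitrate(44_100, 16, 2)       # CD audio: 1,411,200 bps ~= 1411 kbps
hires = pcm_bitrate(192_000, 24, 2)   # 'Hi-Res Lossless' ceiling: 9,216,000 bps ~= 9.2 Mbps

print(cd, hires)
```

So even before overhead, the hi-res tier needs several times what BT 5.0's 2 Mbps PHY can carry; ALAC compression typically saves 40-60% but gives no guaranteed worst-case rate.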
Apple filed a patent a few years ago for something called "high data rate" (HDR) that's capable of 8 Mbps. Maybe something will come of that eventually.
It feels like Apple released the lossless Apple Music update prematurely. I'd expect them to do the lossless update at the same time as introducing a refresh of their high end gear that is able to take advantage of the upgrade.
If my speculation is correct, did they do it early to get the jump on another platform? That also does not seem like an Apple style move.
It's been long reported that Apple has been undergoing a strategic shift to put more focus on their services business, and this seems like some more evidence of that. It's certainly a very "non-Apple" thing to do based on how they've operated for the past 2 decades. The release of lossless audio and the upcoming release of Apple Music on Android are two major features that add zero value or lock-in pressure for Apple hardware but add real value for subscribers of Apple Music.
Maybe their services teams are finally starting to be given leeway to operate as real businesses and do things to grow their revenue rather than just operate as growth channels for Apple hardware as they always have in the past.
Rough for Spotify - since Apple is including theirs at no extra cost, it becomes tough for Spotify to justify charging extra for it. So it goes from a feature that makes them money, to a feature that instead costs them money in engineering resources and bandwidth, before they even release it.
That depends if you believe Spotify users will switch to Apple just for that feature... if they won't, Spotify can likely still charge extra for it. Folks are pretty tied in to ecosystems at this point: if you've bought audio hardware that supports Spotify but not Apple, you're not going to switch.
Do you really expect to notice a difference on a HomePod? Most people have difficulties differentiating 128kbps mp3s from lossless using high-end hardware.
I’m just going to say it… it was a stupid feature to “announce” the way they did.
It should have been a bullet point mentioned in passing at WWDC… assuming they had anything at all about the Apple Music APIs to mention. This isn’t something enough people care enough about to generate hype or growth, releasing it out on its own just allowed them to draw attention in isolation which has cast it in a less than ideal light due to its shortcomings.
If they knew (and they obviously did) that it was supported in such a statistically limited (percentage-wise) subset when compared to their overall customer base’s listening habits… then making any fuss at all was kind of stupid. They now have the unforced error of having high-value customers buying their most expensive products asking uncomfortable questions like “why did you sell me $1000 headphones that don’t support your best stuff?”
Tidal, Amazon Music HD, and Spotify hifi are all services aimed at selling higher fidelity Music at higher prices. Apple gives this away and they should just make it a bullet point?
I figured as much. Oh well, for serious listening I'll just get that Lightning-to-headphone-jack cable and plug it into my Schiit stack (Modi DAC / Magni Heresy amp).
Funny that adding a feature (lossless) to Music backfires for Apple, because physical laws and Bluetooth standards prevent it from working with their wireless headphones.
I think the iFi Nano iDSD is the best I’ve ever heard. The DAC/amp combination is incredibly clean. It has MQA - I don’t use that, but if you want to, it’s there.
The Rode AI-1 is a great desktop-only option too, also a fantastic DAC/headphone amp. It just bothered me because it allows digital volume control so you’d have to make it an aggregate device in OS X audio settings to disable that.
[0] https://www.freepatentsonline.com/y2019/0104424.html