Glitching attacks are typically performed by switching the supply voltage at quite high frequencies, so a typical low-voltage detection circuit won't trigger a reset under such conditions. This is also why glitching attacks are often performed by spiking the voltage higher, not lower. See for example Joe Grand's latest video on breaking crypto wallets [0].
Low-voltage detection is usually implemented as a simple comparator that should trigger instantly, but it often monitors only a single Vcc pin, and because of the decoupling caps found in a typical circuit design there is effectively an RC circuit that filters out short fluctuations of the supply voltage. So most low-voltage detection implementations only trigger on 'longer' periods of low voltage.
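Just to put rough numbers on that RC argument, here is a first-order sketch (all values below are assumptions picked for illustration, not from any datasheet):

    # First-order sketch: the comparator watches a node that is RC-filtered by
    # the decoupling caps. All values are assumptions chosen for illustration.
    import math

    R = 10.0         # ohms, assumed effective supply/trace impedance
    C = 10e-6        # farads, assumed bulk + decoupling capacitance near the Vcc pin
    TAU = R * C      # time constant, here ~100 microseconds

    V_NOMINAL = 3.3  # volts, normal rail
    V_GLITCH = 1.0   # volts, level the attacker pulls the rail down to
    V_TRIP = 2.9     # volts, assumed brown-out threshold

    def filtered_minimum(duration):
        """Lowest voltage the detector sees for a rectangular dip of this length."""
        return V_GLITCH + (V_NOMINAL - V_GLITCH) * math.exp(-duration / TAU)

    for duration in (100e-9, 1e-6, 1e-3):  # 100 ns, 1 us, 1 ms dips
        v = filtered_minimum(duration)
        print(f"{duration * 1e6:10.3f} us dip -> detector sees {v:.2f} V "
              f"({'trips' if v < V_TRIP else 'no trip'})")

With those made-up numbers, a sub-microsecond glitch barely moves the filtered node, while a millisecond brown-out sails straight through the threshold.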
Traditionally, low-voltage detection features (like brown-out detection) are there to guarantee the functionality of the uC itself or the device the uC controls. They are typically not intended as a defence measure against these types of attacks. In fact, 15 years ago this may not have been much of a concern.

[0] https://www.youtube.com/watch?v=MhJoJRqJ0Wc
Similar story: I had a customer who wanted me to change the entire UI of a legacy application, because some information would not fit on the ancient 1024x768 15" desktop monitor of one employee, meaning he had to scroll horizontally constantly.
I recommended giving this employee a larger monitor: not only would that be much cheaper than having me rebuild the entire UI, it would also boost his productivity. Not to mention that swapping a monitor takes 10 minutes, while changing a UI probably takes weeks.
The customer insisted on changing the UI, because "if we give him a new monitor, everyone in the office will want one". I nearly got fired for responding with "Great! Then everyone can benefit from more productivity!".
In the end we did change the UI; I believe the total cost was something like 30k. The customer had maybe 15 employees, so new monitors for everyone would still have been much cheaper.
A few months later their offices were remodelled with expensive designer furniture, wooden floors and custom artwork on the walls. Must have cost a fortune. In the end, the employees still worked on ancient computers with 15" monitors, because new computers didn't fit the budget.
My understanding is that the A18 CPU is already pretty well understood. AFAIK it doesn't have the new architecture that is keeping the Asahi team from supporting the M4 and M5, for example.
But I guess we'll have to wait for devs to get their hands on a Neo device.
I'm not sure what Hector's personal choices have to do with not "trusting" a piece of software? It's open source, so if you don't trust the quality of the software, then just inspect it yourself?
Also, FWIW: Hector/Lina is no longer associated with Asahi.
I'd take that claim by Mecha with a huge grain of salt.
How are they going to fund 7 years of support for a device that sells maybe a few thousand units? How are they going to guarantee they will still be around, and interested in maintaining the device drivers in 2033?
The Linux kernel project will remove the device drivers from the mainline kernel if they are no longer actively maintained and in use. So it is very likely that support will be dropped from the mainline kernel way before 2033, as there probably won't be any users of this device remaining, and the original developers will be long gone.
Call me negative, but I expect that this company will just vanish after some time. The team will just move on, maybe even start again under a different name, but there will be nobody left to be held responsible for the promises and claims they made in the past.
I can completely understand the skepticism; any startup today releasing something and promising to support it will be taken with a grain of salt. I cannot guarantee that Mecha will not go out of business within 7 years. But at the very least we have the confidence to commit to 7 years of support, if we are able to keep the show going.
Why we are confident of extending this support:
1. The SoC from NXP is widely used in automotive and industrial applications. Its support is listed until 2036 (https://www.nxp.com/products/nxp-product-information/nxp-pro...), which means its downstream will keep seeing updates. In a comment above I mentioned that they follow the 6-month + LTS release schedule. To give an example, i.MX6 SoCs released in 2011 are still actively supported in 2026; you can still buy SoMs, and they are still deployed in production.
2. The WiFi chip we are using is the NXP IW612, which again has longevity listed until 2038, meaning its driver will keep being updated and maintained.
3. Our audio codec is from Analog Devices (MAX98090), again widely used and in production.
4. Most of our USB and power controllers are from TI, and can be expected to be supported in the kernel for a long time.
5. None of the parts we've used are marked 'not recommended for new designs' or obsolete, and none come from unknown vendors. A lot of care has been taken in choosing the right parts.
From my point of view, our support work consists of pulling changes, running our test suites, seeing that everything works, and repeating. What am I missing? There are no device drivers exclusive to the Comet at this stage. You can review our device trees in our repos.
Also, we have a longer roadmap ahead of us - selling a few thousand units in 5 days is no indicator of how things will be in the future. We are betting on this hardware and on more hardware that we will release later.
You can sit on the fence and keep expecting us to fail, that is your prerogative. But that doesn't automatically imply that we are ill-prepared.
2033 is not that far away. If they sell a few thousand units there will still be users, so the kernel would usually not remove the drivers as long as they are not defunct.
You might very well be right about the company; it is, after all, the likely outcome for all companies. But if the kernel support is seeded properly, there should be a bit more time than predicted even then.
Also, on the positive side: they did the communication on the website really well (I stumbled over the Comet before), extended it nicely, and the Kickstarter campaign seems to be a big success. They have a good chance of sticking around.
As someone who works on a device that was initially released in 2020 and uses a SoC from the same family (i.MX 8M Quad), 7 years of support is not unbelievably long (provided that the company stays afloat at all).
I do the same. I can SSH into my router at home (which is on 24/7), then issue a WOL request to my dev machine to turn it on.
You don't even have to fully shut down your dev machine; you can let it go into standby. For that, it needs to be connected to the LAN by cable and configured to leave the NIC powered on in standby. You can then wake the device remotely with a WOL magic packet. Maybe this is possible over WLAN too, but I have never tried.
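For reference, the magic packet itself is trivial to build and send yourself; here's a minimal Python sketch (the MAC address and broadcast address are placeholders, and you'd run it from something on the same LAN, e.g. the router):

    # Minimal wake-on-LAN sender. The MAC below is a placeholder for the dev
    # machine's wired NIC; adjust the broadcast address to your LAN if needed.
    import socket

    def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
        # A magic packet is 6 bytes of 0xFF followed by the target MAC repeated 16 times.
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        payload = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(payload, (broadcast, port))

    send_magic_packet("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the dev machine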
Also, you don't need a Tailscale or other VPN account. You can just use SSH + tunneling, or enable a VPN on your router (and usually enjoy hardware acceleration too!). I happen to have a static IP at home, but you can use a dynamic DNS client on your router to achieve the same effect.
I used to work as a freelancer back in the day. I worked a lot for a customer who became a good friend. At first I'd work on his projects, but this ultimately shifted to a model where I'd work on projects for his clients, I would bill him, and he would add his margin and bill the end-customer. It worked out great this way.
One day I got a call from him saying that our 'mutual' customer had an urgent job. They were supposed to do a national roll-out of a new payment system, but seemed to have forgotten about a bunch of legacy PoS systems that were still operational and couldn't easily be replaced. Because I was seemingly the only one who was still familiar with this particular system (I had worked on it once in the past), the end-customer asked my friend whether I would be available to do this quickly. This was in late November, and the rollout was planned for January. Because this end-customer is a government org, I realised we could be certain they wouldn't be working during the holidays (which, in my country, typically means 2 weeks around Christmas and New Year's), so really we had only 10 days or so to get it done in time for their team to test it before the holiday shutdown.
I didn't feel like doing such a complex job on such a tight deadline. So I quoted a much higher rate than normal. I also quoted many more hours than I thought were required, due to the typical overhead that this large end-customer would surely incur. Finally, I added a retainer fee, because I knew that if problems occurred (likely on the last day before the rollout), I'd have to drop everything I was doing and work for them.
I got the job.
I worked feverishly to meet the deadline. I cancelled commitments on other projects, paid an extortionate amount to have testing hardware overnight-delivered to my office, bought very expensive testing gear, signed all the NDAs required to work on PoS card payment interfaces, etc. I then worked basically round the clock for 10 days straight to get it done. I did get it done in time, submitted the code to the repository and fired off an email to the team manager that it was in fact done a day early. ...I was greeted with an auto-reply that the manager would be on holiday till mid-January, which was the week the entire new payment system had to be rolled out nationwide.
I wasn't feeling great about it, but my friend urged me to send the invoice for the work I had done, and also for the retainer for the rest of December and January. This would allow the customer to write off the expenses in the current calendar year. I sent the invoice; it was the most money I'd ever invoiced, and whereas I normally invoiced per month, this was for a mere 10 days.
December passed, no response from the supposed review team. I stayed on stand-by, declined any other work, stayed sober during the various new-year's office parties, always brought my laptop along, etc.
January came and went. Still no response from the code review team. The new payment system was due to be rolled out mid-January, but nothing had happened. The company had done extensive ad campaigns beforehand announcing the new payment convenience for their end-users, so the only 'feedback' I saw was frustrated users on Twitter. I still felt bad about charging for the retainer.
This kept going. At some point I did stop sending invoices for the retainer. My friend always paid me in advance (the end-customer was notoriously slow to pay, though they did always pay in the end), and I didn't want to cause him too much exposure.
To my knowledge, the software I wrote was never used in the end. To the public it was stated that the PoS systems were simply too old to be upgraded (not true, obv) and that they'd replace them 'soon'. It is now 4 or 5 years later; the old PoS terminals are still there, sans the functionality I added.
By pure coincidence, years after the job I found out that an old friend of mine, who was also a freelancer at the time, had been tasked around that same time by the same customer with doing a code review of a supposed PoS system upgrade. Without realising it, he reviewed my code! He was under the same time pressure, and did the code review during Christmas to deliver the results on time before the national rollout in mid-January. He also charged a huge amount of money for it, was also paid, and also never heard about it again. At least he said he remembered being impressed by the quality of the code, and didn't find any defects. So that's about the best outcome of the project, I guess.
My takeaway from this: if you are a freelancer and a large customer wants something done in a hurry, charge more than you ever dared and don't feel bad about it. You'll find that suddenly there isn't as much of a deadline anymore. If the customer declines due to the price, be happy you dodged a bullet.
I bought the top-of-the-line TV from Samsung in 2011. The 'smart' functionality's services went offline after a year or two, which means all the 'smart' functions no longer work and I now happily use it as a dumb TV.
Eventually every smart TV becomes dumb when they inevitably shut down the backend services.
> Eventually every smart TV becomes dumb when they inevitably shut down the backend services.
Except that on newer TVs all the nagging will still be there, all the ads will be "frozen" in time (mine has ads for stuff from 2023, from the last time I connected it for some firmware update that _GASPS_ actually fixed some things), and some features may depend on internet connectivity. The manufacturer may care to release a final update to solve these issues, but you know they are much more likely to fraudulently just disable features that worked offline, as a last middle finger.
Repeat after me: SaaS is fraud. Proprietary digital platforms are fraud.
I had a very similar problem with my cable internet circa 2010. It must have been DOCSIS 3.0. Multiple times a day my connection would stop working completely. The modem's 'connected', 'carrier up' and 'carrier down' lights were on, and I had LAN communication with the modem, but no data would pass through on the WAN side.
From the modem's management page (which, I later learned, you weren't supposed to know about) I could see that the upstream and downstream carriers were correctly established and still operational, but at the IP (PPPoE) level the TX (upstream) packet counter was increasing while the RX (downstream) packet counter was not. Releasing the IP on my router (remember, it was PPPoE), then waiting 10 minutes or so before renewing the IP via DHCP, would bring connectivity back.
I would call my ISP (the largest ISP in my country) to try to resolve the issue. Every. Single. Time. I had to explain to the support employee that yes, I did disconnect and reconnect power, yes, my computer's software was up to date, yes, I did try connecting via LAN directly to the modem to eliminate any possible router issues, etc.
Now, at this point in the story I should point out that I held a degree in electrical engineering, specialising in embedded systems and high-speed data transmission, and also had just about all the Cisco networking certifications. I was more than qualified to design cable modems myself, so imagine the frustration of not being able to fix this issue.
One night I came home to the same problem and called customer service again, fully prepared to do the 'dance' of answering every basic troubleshooting question. But to my surprise, the guy on the phone seemed genuinely knowledgeable. When I described the symptoms I saw on the modem's management page, he was rather surprised that I had managed to discover that functionality, but said that in that case he knew what the problem would be.
The support employee was quick to confirm that someone in my neighbourhood had hard-coded his IP address instead of using DHCP (a common trick back in the day to get a static IP on a residential cable connection), and that that IP was clashing with the IP their DHCP would assign to my router's MAC address. He asked me what brand of router I had, and I had to explain to him that it was a self-built OpenBSD box. His response was: "Great! Then you probably know how to spoof the MAC on your WAN interface?". I did; I changed my MAC to a value he gave me, and immediately my connection came back up. He explained to me that any MAC address starting with AB:BA (named after the band) was reserved for a special block of customers with this kind of issue.
We continued chatting a bit about DOCSIS, networking technology, modulation types, OpenBSD (it was also his favourite OS) and much more nerdy stuff. At some point I asked him, respectfully, how someone with his knowledge ended up at the support helpdesk of an ISP. He then told me he was the ISP's CTO, in charge of all network operations, and that he was just manning the helpdesk while his colleagues were on a dinner break...
Man, I remember when you told this story some years ago, and I still very much like it!
(I have an hnrss.org feed with all comments mentioning OpenBSD, so I was bound to catch it.)
What a jump. I'd be curious to hear, first, why anyone would prefer Intel over pretty much anything else, but also, secondly, what the actual experience difference between the two is after working at both; it must be a very strong contrast.
On her website it says she is working on GPU drivers there - I wouldn't be surprised if that's something she greatly enjoys, and Intel gave her the opportunity to work on official, production-shipping drivers instead of reverse-engineered third-party drivers.
Maybe she was given a huge signing bonus to keep her from working on making x86 irrelevant? Combined, perhaps, with some interesting project to work on for real.
I wouldn't have thought so 5-10 years ago, but with Microsoft offering Windows on ARM there is really no OS that specifically targets x86 (legacy MS products will keep it alive if the emulation isn't perfect).
The thing is, x86 dominance on servers, etc. has been tied to what developers use as work machines; if everyone is on ARM machines, they'll probably be more inclined to use ARM on servers as well.
Microsoft has tried Windows on ARM, like, 5 times in the past 15 years and it's failed every time. They tried again recently with Qualcomm, but Qualcomm barely supports their own chips, so, predictably, it failed.
The main reason x86 still has relevance and will continue to do so is because x86 manufacturers actually care about the platform and their chips. x86 is somewhat open and standardized. ARM is the wild, wild west - each manufacturer makes bespoke motherboards, and sockets, and firmware. Many manufacturers, like Qualcomm, abandon their products remarkably quickly.
Huh? Qualcomm announced the X2 chips just 2 months ago, with shipments early next year. I looked at a local dealer site and there are MS, Dell, Asus and Lenovo Windows-on-ARM machines (with current-gen X Elite chips).
Yes, Windows on desktop hardware will probably continue mainly on x86 for a while longer, but how many people outside of games, workstation scenarios and secure scenarios still use desktops compared to laptops (where SoCs are fine for the most part)?
1: It's not meant to be cute, but rather to express incredulity at a statement declaring something to have failed when it still very much seems to be in the process of being rolled out (and thus to indicate that it'd be nice to have some more information if you know something the rest of the world doesn't).
2: Again, how are they failures? Yes, sales have been so-so, but if you go onto Microsoft's site you mostly get Surface devices with Snapdragon chips, and most reports seem to be from about a year ago (it would be interesting to see numbers from this year though).
3: Yes, I got a new x86 machine myself a month back that has quite nice battery life. Intel not being stuck as far behind on process seems to have helped a fair bit (the X Elites don't seem entirely power-efficient compared to Apple, however).
4: Yes, _I_ got an x86 machine since I knew that I'd probably be installing quirky enterprise dependencies from the early 00s (possibly even 90s) that a client requires.
However, I was actually considering something other than Wintel, mainly an Apple laptop. If I'm considering options and am mostly held back by enterprise customers with old software I'd need to maintain, the moat is quite weak.
My older kids' previous school used ARM Chromebooks (the current upper high school uses x86 HP laptops, but they run things like AutoCAD), while the younger one has used iPads for most of junior high.
Games could be one moat, but is that more due to the CPU, or to the GPUs being further behind Nvidia and AMD? Someone was running Cyberpunk 2077 on a DGX Spark at 175 fps (with the x86-64 binary being emulated..)!
But besides games and enterprise...
So many people who use their computers for web interfaces, spreadsheets, writing, graphics (Photoshop has ARM support) and so on won't notice much difference on ARM machines (which is why my kids have mostly used non-x86 so far). It's true that such people are using PCs less overall (phones and/or tablets being enough for most of their computing), but tell a salesman Excel jockey that he can get 10-20% more battery life and he might just take it.
Now, if Qualcomm exits the market by failing to introduce another yearly/bi-yearly update, then I'll be inclined to agree that Windows on ARM has failed again... but so far that's not really in sight.
I imagine there's also some challenging work that would be fun to dig into. Being the person who can clean up Intel's problems would be quite a reputation to have.
There’s a real limit on what level of problem one engineer can fix, regardless of how strong they are. Carmack at Meta is an example of this, but there are many. Woz couldn’t fix Apple’s issues, etc.
A company sufficiently scaled can largely only be fixed by the CEO, and often not even then.
I'm sure most would stay at Valve if they could. They just do so much contract work, and I'm sure a stable job at Intel means better pay, benefits and stability.
Would it shock you to hear that famous engineers with their own personal brand power have different opportunities and motivations than many/most engineers?
Their point is made even stronger by your comment. Engineers of this type don't experience megacorps like regular engineers do. They usually have a non-standard setup, more leeway, and less bureaucratic overhead. Which means the brand isn't the biggest thing; the specific projects and end-user impact are.