Same sources: 電子時報 (DigiTimes), 中國時報 (China Times), 經濟日報 (Economic Daily), 工商時報 (Commercial Times). They're known for publishing false rumors to move the stock market.
The rumor is 180,000 wafers for GPUs next year, which is impossibly large. For comparison, AMD booked 200,000 wafers for all of their CPUs, GPUs, and PS5/Xbox SoCs combined.
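To put the scale in perspective, here's a rough sanity check. The die size, the yield, and the die-per-wafer approximation are all my own illustrative assumptions, not figures from any report:

```python
import math

# Rough sanity check on the rumored 180,000-wafer GPU order.
# All inputs are illustrative assumptions, not sourced figures.
wafer_diameter_mm = 300.0
die_area_mm2 = 600.0     # assume a very large GPU die
yield_fraction = 0.6     # assumed yield for a die this big

# Common approximation for gross dies per wafer:
# (wafer area / die area) minus an edge-loss term.
gross_dies = (math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
              - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

good_dies_per_wafer = gross_dies * yield_fraction
rumored_wafers = 180_000
implied_gpus = rumored_wafers * good_dies_per_wafer
```

Even with a huge die and mediocre yield, that order would imply on the order of ten million very large GPUs per year, which shows why the number strains belief.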
I guess there can be an element of truth (10%? 1%?) in the reporting, just greatly exaggerated and distorted. It wouldn't surprise me if Intel actually had a minor order with TSMC for something.
But I do understand your cynicism on Taiwanese media. From the PTT link, I see one comment says,
> 現在放消息都會被當出貨文.....
Which translates to: right now, whatever news gets leaked, it's all assumed [by readers] to be a pump-and-dump story.
People who try to move the stock market certainly have some power (or at least a lot of money, otherwise it's useless), but that doesn't mean the people who lost money, or the regulators, are okay with market manipulation.
Maybe they can't do anything about it. I admit my knowledge of finance is limited. I've read about the SEC opening investigations after Elon Musk's tweets, for example.
I am pleased TSMC is moving the industry forward, but I fear we are moving from an Intel monopoly to a TSMC monopoly. As a consumer the result will be the same.
As a customer I'm somewhat happy Intel is finally paying for their anticompetitive practices (karma-wise), but at the same time I'm worried about the US losing its advantage in chip manufacturing.
Do we know what really went wrong at Intel? I know they made a big bet on a technology that didn't pay off. Is it that they just had no competition for years and became complacent?
I also wonder how much TSMC will ask Intel to split the company, to ensure that their process doesn't leak to the Intel fab side. There are probably lots of interesting details about what works that you could pick up by working closely with TSMC engineers.
Now this doesn't come from any position of authority or knowledge of the industry but if I had to guess I'd say their problem has been twofold: shareholder value and lack of competition.
Intel, for years, has been the standard choice for the datacenter; AMD never really got a look-in there. As a result, Intel has been making tons of money for years, and by extension so have the shareholders, so there hasn't been any real reason to take too many risks or change direction, especially as AMD was a fraction of Intel's size.
AMD have only just recently (in the last couple of years) become an actual, quantifiable threat to them, so if I had to guess, Intel are in a bit of a panic as the world makes a slow shift to AMD (and possibly ARM and more specialized chips too!).
I can imagine that the lead time for chip designs is measured in years so Intel can't just make a massive direction change overnight, hence the reason to ask for external help.
Time will tell if this turns out to be a brilliant move for Intel...
That's the opposite of what I'm hearing, which is that Intel placed a large bet on a risky basket of technologies in order to stay ahead (specifically cobalt wiring and contact-over-active-gate) and burned enough time trying to get them working that TSMC was able to get their own relatively safe basket of technologies working in the meantime.
10nm was the first node to use the problematic features, but Intel certainly hoped to use the same technologies at smaller nodes as well. Cobalt and COAG become more relevant, not less, with finer features.
As for whether or not they will eventually work, that's the many-billion dollar question. In mid-2018 the boots-on-the-ground were not hopeful (the "10nm will never work" leaks) but clearly someone at Intel thought there was hope because they didn't just abandon the features. I think we just saw that "someone" get fired, though, so I suspect the attempts to save these features have failed.
Thanks. I guess it's difficult because 'working' is not binary, rather a question of yields etc and how those are progressing as the process matures. Intel can clearly manufacture 10nm parts - I have one in my laptop - but the economics need to work too.
Supposedly the 10nm problems were related to the quad-patterning Intel was using to reduce cost until EUV became more economical. But now the EUV-based 7nm is apparently having problems too.
It's certainly possible that EUV fixes everything and 7nm is a relatively smooth node, maybe this is just the one oops and it's all smooth sailing from here, but I don't think anyone really believes that. Everyone is worried it's going to be 10nm all over again where the schedule slides 4 years in 4 years, 6 months at a time.
Everything I've heard from Intel engineers is that they have a culture problem. It's a high-stress, low-empowerment environment with overbearing, micromanaging bosses, and over the last ten years, as AMD faded out, the beancounters overtook the engineers and started making decisions that hurt the product. The middle managers and beancounters are in control and more concerned with (respectively) covering their asses and extracting rent than with engineering the next generation of a great product. A quote from someone else was that he guessed they needed to cut away "the top 3 layers" to get at the rot.
Unfortunately, rototilling the staff to cut away the rot will probably produce even more delays in the short term. Institutional knowledge gets lost, people start to look for other job opportunities, etc. Intel is in deep shit.
I don't even know what the short-term fix would be. The designs are tied to the process, the process is fucked because the approaches they're using are fucked and don't work, and they aren't going to repeal the laws of physics in the next 2 years. I guess you produce it even if it's not at the profit you normally expect, just to try and maintain market share?
Middle term they need to find a Jensen Huang or a Lisa Su. An engineer-CEO who understands the tech intimately and the tradeoffs being made, and where the market is going.
Comedy option, merge with NVIDIA, and make Jensen the CEO. AMD wanted to do it back in the day, AMD's board just couldn't find their way to "yes", which was a massive mistake (bulldozer probably would not have happened under Huang), led them to acquire ATI at a hugely overvalued price, and the resulting financial troubles caused them to lose their fabs. His work culture is also pretty toxic but they actually get shit done like few other companies. I'm really only half joking, but it'll never happen.
This sounds awful. I can't imagine what it does for morale when they spend $900m on Moovit. It certainly doesn't suggest management focus on the most important issues. At least investors understand: the questions on the earnings call were basically all about 7nm.
Jensen taking over would certainly be interesting!
They spend $x billion on whatever acquisition of the week and they take away the quarterly ice cream social and the free fruit out of the caf. That’s a specific complaint I heard, yeah. And that’s Bob Swan's doing.
The old adage about "when they take away the free soda it's time to start looking for the exits" rings as true as ever. It's fine if you don't ever offer it in the first place but when the CFO is suddenly checking the phone booths for leftover change, there are bigger problems.
What ultimately seems to stifle US companies is management, bureaucracy, and "shareholder value". They forget their engineers, and what their business actually is.
IBM in the past, Boeing recently, Intel right now.
I agree. In fact, Tim Cook even told major shareholders of Apple that if you're looking for immediate value, you're in the wrong stock and should leave. What companies need are bold leaders like Tim Cook who put their foot down and say the value to the customer is more important than the value to shareholders. Shareholders are, in a way, their own worst enemy.
It isn't just the USA. The natural state of things is to not be run that well. Any well run company will tend to descend and get worse, revert to the mean. It happens to everything, everywhere.
> As a customer I'm somewhat happy Intel finally pays for their anticompetitves practices
They already paid for that several times over. AMD only exists today because Intel was forced to license their technology. Intel has been artificially restrained for decades. Competition all over the world has benefitted massively from that.
Now of course people conveniently forget that history and pretend Intel wasn't made to fight with one hand tied behind its back for three decades.
There's nothing quite like watching the tied-down champ get beat, and then watching the crowd gloat about it.
The US did this to itself by crippling Intel. The US always does this to its champions, meanwhile competitor nations do the exact opposite, aggressively promoting and shielding their most valuable companies. Then people wonder golly gee what happened to such and such company, we tied its hands behind its back and it got beat to a bloody pulp, how could that outcome happen. Get up champ, take some more punches.
Anyone think China is going to cripple its strongest companies? South Korea is going to break Samsung into pieces? Taiwan is going to break up TSMC or force it to license its technology to Intel to spur competition? Germany is going to break up its oligopoly auto companies? Of course not, nobody else behaves that way. Meanwhile the US is going after its most powerful tech companies and always does. US competitors laugh their asses off as we make everything easier for them.
AMD was originally licensed access to Intel's 8086 and later designs because customers demanded a 'second source' manufacturer in case Intel had manufacturing problems (sounds familiar!)
Intel tried to ditch second sourcing with the 386 but AMD was able to continue manufacturing compatible CPUs.
Much later, Intel was accused of anti-competitive behaviour by AMD in various markets, which resulted in a fine in Europe and a $1.25bn payment to AMD plus patent cross-licensing.
But - AMD licenses some important technology to Intel eg x86-64.
It presupposes that what was best for the USA and the world was Intel being maximally successful, rather than there being a market for x86 chips. That's not clear at all. If American capitalism is defined by anything, it's competition, something Intel has sometimes unfortunately tried to squash.
The real answer here is not to come up with artificial Intel-specific legal kludges like forcing them to cross-license to AMD, but rather to adjust copyright and patent law to define APIs and ISAs as non-protectable.
I don't think this is correct, because TSMC isn't so vertically integrated. There is not a "TSMC CPU" or GPU per se; they're providing a service to other businesses, not in direct competition with every other design. That doesn't mean there couldn't potentially be adverse results, but if so they'll probably be different adverse results, since TSMC is entering more into a "basic infrastructure" role vs B2C. Chip design matters a lot to ultimate performance/efficiency, and fabless has promoted rather than reduced competition there. Their customers also have very deep pockets themselves, and TSMC has a lot to gain from keeping demand high with improvements.
In fact, a potentially more worrisome aspect than any immediate competition threat is the sheer reduction in redundancy of critical production (as we've seen with hard drives and NAND production, for example). TSMC has four of their 300mm gigafabs in Taiwan and a 5th under construction, plus another four 200mm fabs. They've got a single 300mm fab in China, and that's it for that size. They've got a 200mm fab in China, one in the US, and one in Singapore. A lot of capacity is in one small island.
Taiwan is on the ocean along the Ring of Fire. I know TSMC has of course done major disaster prep, but it's not impossible for the world to throw curveballs. Taiwan has had earthquakes in the last 100-some-odd years up to magnitude 8.3, but only 7s for the last 50+ (the biggest more recent one for them I think was 1999). Of course construction codes (and major manufacturers even more so) take earthquake resistance into account. But what if there was a 9er and a typhoon, say?
And of course there is geopolitics too. China has made no secret of its desire to retake Taiwan, but hasn't been taking recent actions to make that attractive or peaceful. And the very presence of so much critical-to-US production there enhances the risk (hope?) that yes, the US will in fact honor its defense agreement and cause the whole thing to get very hot.
So not that other fabs wouldn't be good for competition reasons too, but redundancy may be valuable in and of itself. However, if so it's not something the market can likely provide, the market tends to optimize out that kind of redundancy since it is by definition not efficient. Fabs represent enormous capital expenditure with long lead times that requires extremely large scale to operate profitably, so governments would probably have to step up and flat out pay for redundant capacity. And arguably should, but even at the government level $10-20b isn't nothing.
If there is one company with the will, the liquid cash, and the volume to break away from TSMC, it is Apple. Just saying.
They design their own chips in-house, they are moving to use them across their full stack, they are already TSMC's #1 customer by a wide margin, and they could literally build a 3nm fab with 40k wafers per month with less than 10% of the liquid cash they have on hand ($5b for the node and $15b for the fab). And they are the type of company who brings everything possible in-house, and the type of company who looks at a single-source monopoly like TSMC as a business risk. Fabs are literally the last part of their hardware/software stack that they don't control and that another company could potentially leverage them over.
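A quick back-of-envelope check of that claim. The node and fab costs are the ones given in the comment; Apple's cash balance is my own assumed round figure, not a sourced number:

```python
# Back-of-envelope check of the fab-cost-vs-cash claim above.
# Apple's liquid cash is an assumed round figure (~$200B), not sourced.
node_development_cost_b = 5    # claimed cost to develop the node ($B)
fab_construction_cost_b = 15   # claimed cost to build the fab ($B)
apple_liquid_cash_b = 200      # assumption

total_cost_b = node_development_cost_b + fab_construction_cost_b
share_of_cash = total_cost_b / apple_liquid_cash_b
```

Under those assumptions the whole project comes to roughly a tenth of the cash pile, which is the "less than 10%" ballpark the comment gestures at.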
Right now there is no alternative to TSMC, people literally pre-eulogize NVIDIA Ampere and other products simply because they're on Samsung instead of TSMC. How do you think that looks to a company like Apple, that they are completely dependent on this company for node tech and subject to increasing competition for wafers as everybody else realizes there is no alternative and piles in too?
Well, not quite. All the foundry players (e.g. TSMC, Intel, and Samsung) invested heavily in ASML in an effort to jumpstart EUV development. They have since reduced their financial stakes to low single digits as the tech matured.
I feel like HN found out about ASML a few months ago and now mentions it at any opportunity.
There are loads of companies in the supply chain for fabs, all of them specialising in vital bits of equipment that nobody else can replace. ASML is not the only ASML.
Probably the best way to break that monopoly would be to deny their machines to a largish country that would be willing to fund its own domestic competitor ..... but why on earth would that happen?
Yeah, well, that was my point - the US government's current bad behaviour will just create a stronger competitor - we'll all probably benefit as a result
There are only a few foundries that can afford, or want, their machines. GlobalFoundries bowed out of the 7nm race, and SMIC can't get US approval. That leaves about a handful of companies that want their machines: Samsung, Intel, and TSMC.
Because to change to another fab you "just" have to get a single transistor (or a few devices) working (and duplicate that a billion times). Changing to another ISA requires a lot of redesign.
> It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.
And he meant it too, as later in the thread when someone says they don't believe him about the non-portability:
> Simply, I'd say that porting is impossible. It's mostly in C, but most
> people wouldn't call what I write C. It uses every conceivable feature
> of the 386 I could find, as it was also a project to teach me about the
> 386. As already mentioned, it uses a MMU, for both paging (not to disk
> yet) and segmentation. It's the segmentation that makes it REALLY 386
> dependent (every task has a 64Mb segment for code & data - max 64 tasks
> in 4Gb. Anybody who needs more than 64Mb/task - tough cookies).
This is a historical curiosity of course. The Linux kernel supports more architectures than anything else.
Mostly because of the slow adoption which allowed a lot of flexibility. It's much easier to move around when you don't have the baggage of millions of users and applications to consider.
Even Windows ran on plenty of non-x86 architectures usually because they got in on the ground floor before there was baggage to migrate. And even now you can run it on ARM but without much traction because people hate losing functionality.
Was there an effort to make it cross-platform, or was it just that tech that looked 386-only at the time, like an MMU, became more widespread? (I remember a fail0verflow talk on PS4 Linux a while back where they mentioned an MMU and a clock as the requirements for porting Linux to an architecture.)
Linux has supported a lot of CPU architectures pretty much from childhood but never had any legacy to carry over. Legacy is the real killer when talking about such migrations and the bigger the footprint with consumers, the bigger the legacy.
Linux never had a significant footprint in any long existing segment at the time they migrated from one architecture to another, especially not in a consumer one. Take Apple and their move from 68K to PPC to x86 to ARM.
This is absurd. Intel's fab capacity is several times TSMC's, so even if Intel had chosen to do this, there is no way TSMC would be able to satisfy Intel's needs. In addition, most of TSMC's capacity has already been committed.
That’s the business side; ignore the technical errors like 3 nm.
There's already a comment saying this is market manipulation; I just wanted to point out the logical fallacy, to avoid an "is not" / "is so" back-and-forth.
Intel already confirmed on their earnings call that Ponte Vecchio will be based on "external process technologies", so while yes, 6nm is (probably) an error, Intel is 100% certainly running their GPUs on TSMC.
They're not moving everything to TSMC, just this specific line of products - which is a new line and would have required additional fab capacity somewhere. That's either 14nm (completely full), 10nm (massively low yields on something this big), or outside the company.
Intel still has good profit margins, much better than AMD's. But Intel has rarely been the best in microarchitecture. It has almost always been their superior process technology that gave them the edge over the competition.
If Intel is forced to compete against Nvidia and AMD as a fabless company for several years, it's a very bleak future for them.
Intel going fabless is like Boeing ordering aircraft from Airbus. Desperate but temporary. Intel can't cut their core business because everything is built around it. They can fall, but they can't quit.
"TSMC internally does not consider orders for Intel's processors as long-term ones, and therefore is unlikely to build additional production capacity for the new contract if it comes, according to industry sources." https://www.digitimes.com/news/a20200728PD201.html
To work for TSMC it would have to be a long term commitment. But Intel has the cash to pay for it and given recent events it would boost investor confidence if Intel could say (e.g.) 30% of our CPUs in future will be made by TSMC.
>Intel going fabless is like Boeing ordering aircraft from Airbus. Desperate but temporary. Intel can't cut their core business because everything is build around it. They can fall but they can't quit.
I think it's more like both Boeing (Intel) and Airbus (AMD) ordering their planes from, say, Samsung: a relatively neutral third party that has business from many sources and might simply not prioritize your order over other, more lucrative ones.
Intel had undeniably better microarchitecture between the launch of Conroe up to at least the launch of Zen2 (and arguably still has better microarchitecture for some tasks, just not across the board like they used to).
Not just process - Bulldozer first-gen was a 32nm chip and couldn't keep up with Nehalem (45nm) let alone Sandy Bridge (32nm). AMD was significantly worse microarchitecturally for pretty much a decade.
They were ahead with Athlon 64, starting to fall back to equal with X2, Phenom they were moderately behind, bulldozer was a dumpster fire, Zen1 was a dumpster fire, Zen+ was moderately behind, Zen2 back to on par in general performance and better in core count/server chips.
> But Intel has rarely been the best in microarchitecture.
When is the last time Intel was behind in microarchitecture? Ice Lake beats Zen 2, then you follow all the way back through Bulldozer, where AMD was utterly outclassed, then what? K10 vs. Sandy Bridge?
Zen 2 is basically a copy of Skylake, made 5 years late, that only got to compete with Skylake because Intel froze in time.
>When is the last time Intel was behind in microarchitecture? Ice Lake beats Zen 2
I'm not sure you can draw that conclusion just because the two products have similar performance. Zen 2 has better IPC, which is a better indicator of microarchitecture than raw performance. In Intel's case, the only reason they're still ahead is the high clocks from a very refined 14nm process.
Both have a 224 entry ROB, 92 vs. 97 entry scheduler, 180 entry PRF, 168 vs. 160 FP PRF, same number of integer ALUs, similar load/store width, 72 in-flight loads, 56 vs. 48 in-flight stores, etc. Consequently they perform within margins of each other. They're about as minimally different as competitors' products can be.
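Using just the numbers quoted above, the "near copy" point can be made concrete; the key names here are my own shorthand, not official labels:

```python
# Structure sizes quoted above (Skylake vs. Zen 2).
# The "copy" argument rests on how small the relative differences are.
skylake = {"rob": 224, "scheduler": 92, "int_prf": 180, "fp_prf": 168,
           "loads": 72, "stores": 56}
zen2    = {"rob": 224, "scheduler": 97, "int_prf": 180, "fp_prf": 160,
           "loads": 72, "stores": 48}

# Relative difference per structure, and the worst case across all of them.
rel_diff = {k: abs(skylake[k] - zen2[k]) / skylake[k] for k in skylake}
max_diff = max(rel_diff.values())
```

Half the structures are identical, and the largest gap (the store buffer) is still under 15%, which is why the two cores land within margins of each other.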
Contrast to something like Arm's or Apple's cores, which are qualitatively different products, both to each other and to any of Intel or AMD's chips.
>Both have a 224 entry ROB, 92 vs. 97 entry scheduler, 180 entry PRF, 168 vs. 160 FP PRF, same number of integer ALUs, similar load/store width, 72 in-flight loads, 56 vs. 48 in-flight stores, etc.
They have similar metrics, that doesn't mean they're copies of each other. The underlying designs are most certainly different (eg. chiplet vs monolithic). The reason they have similar in those metrics is probably due to convergent design, ie. these combination of functional units provide the best performance to transistor ratio. You see this in nature too. eg. https://en.wikipedia.org/wiki/Convergent_evolution#Opposable..., which look similar at a high level, but are actually different in implementation.
AMD's previous design was its own thing. Arm's and Apple's designs are their own things. There's still flexibility in how to build a core. Skylake is clearly not even the optimal target; Apple's approach easily outclasses it, for instance. And yet AMD did none of that; they built a core that is practically just Skylake, with no real innovation I can spot anywhere.
There's nothing wrong with this as a strategy. AMD's own designs were performing pretty badly and Skylake was a straightforward, reliable target that would lead them back to competitiveness. Their uncore is plenty innovative, given their willingness to compete on scale.
But a rose is a rose is a rose. Zen 2 is arguably more similar to Skylake than Ice Lake is to Skylake.
'Rarely' is probably overdoing it, but Intel has had significant architecture missteps. It's been able to come back so strongly in part because of its manufacturing leadership. Once that is gone, though, it will be much harder.
Is anyone other than me truly shocked at Intel's fall from grace here? Intel outsourcing their chip fabrication because they're basically unable to do it themselves. Like... who would've seen that coming 10 years ago? Or even 5 (back before 10nm was the debacle it became)?
I realize Intel recently ousted their chip chief (who, as I understand it, was poached from Qualcomm a few years earlier), but if I were on the board, I'd be looking at this situation and calling it what it is: a massive failure in leadership and the pissing away of one of the greatest industry leads. And I'd be going further and cleaning house entirely of the executive team.
Oh and free advice for the board: don't put a bean counter in charge of a hardware company. Seriously.
>don't put a bean counter in charge of a hardware company. Seriously.
Or pull a Boeing: move your management away from having a tech background, and move them halfway across the country from the rest of the company too.
Why would Intel not be focusing on protecting its x86 business and working through the issues on getting CPUs onto TSMC first rather than a product which is currently generating zero revenue for Intel?
Just to clarify - not a point about Intel using TSMC at all - just not clear why they would put Ponte Vecchio on TSMC first when CPUs should be the priority.
Protecting its x86 business by outsourcing & relying on an external supplier for one of its core competencies?
They've announced they have contingency plans to hedge against further schedule uncertainty (assuming that means using TSMC), but their #1 priority would be to use their own fabs.
Protecting their x86 server business should be their number 1 priority right now. Customers don't care if the product is manufactured in Intel or TSMC Fabs and the margins will still be more than satisfactory.
If the x86 based product falls behind customers have an incentive to move to ARM made on TSMC (Apple being the obvious case in point). In this respect AMD is doing Intel a long term favour by giving firms an x86 fall back.
That's fairly short-sighted. A $200B company shouldn't protect its core business by handing over its manufacturing reins, effectively creating a monopoly that most of the world relies upon for chip manufacturing, just to make up for a 6-month delay, all while still incurring the massive fixed costs of its existing fabs.
How much more secure would their business be if they're at the mercy of a monopoly supplier with no competition?
At least Apple's business is protected as they're a value-added supplier with a near impenetrable network monopoly to prevent competition, but most of Intel's business is chips, conceding manufacturing of them to a single supplier who's now worth nearly 2x Intel, who are big enough to buy ARM or merge with AMD seems unwise for their long-term prospects.
I didn't say they should hand all their manufacturing to TSMC.
Rather, if they have fallen behind on manufacturing (which they have) and decided that they need to access TSMC fabs (which they seem to have) then the priority should be x86 not Ponte Vecchio.
Why? Because whilst they are behind on process, some firms have a bigger incentive to switch to ARM (e.g. Apple). Once those firms have gone to ARM, they are much less likely to come back to Intel x86. If Intel can keep them away from ARM, they have a chance of getting them back if/when they get their manufacturing back on track.
Either way conceding manufacturing of their flagship products to their competitor is not without consequence, and shouldn't be their primary preference - it's their contingency plan, as it should be.
Sure, their primary preference would be to use in-house, but their flagship products today are being built using either an old or a problematic process node and that's a big problem that is not going to get sorted in 6 months.
Reading the earnings call, I think it's reasonably clear that they will be going outside for at least some of their manufacturing; the 'contingency plan' is really about what happens if things get worse.
This move might be bad for protecting margins but is great for protecting business - AMD will have to pay more if TSMC is using 100% of its production capabilities.
They can do both, still attempt to fix their fabs, still use existing fabs and compete with lower prices (imagine Intel to be the budget choice, ironic) and create super fast CPUs on TSMC for some part of the market that wants the extreme performance.
The world-wide PC market including Intel and AMD totaled about 290 million units last year. 190 million of them were laptops.
Apple sold about 200 million iPhones, 70 million iPads, and 60 million Apple Watches (not including other A-series devices such as T2 chips, Apple TV, or HomePod). All of these are split among TSMC's 10nm and 7nm parts. So it's fair to say that TSMC, with investment in fabs, can work at that scale. I'm intrigued that Intel would be chasing the low-cost 6nm, however.
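Summing the figures above shows the point directly; all numbers are the rough annual unit counts quoted in this comment, in millions:

```python
# Rough annual unit volumes quoted above (millions of units).
pc_market_total = 290   # world-wide PCs, Intel + AMD combined
pc_laptops = 190        # of which laptops

apple_iphones = 200
apple_ipads = 70
apple_watches = 60
apple_total = apple_iphones + apple_ipads + apple_watches
```

By these figures, Apple's TSMC-fabbed devices alone already exceed the entire world-wide PC market in unit volume, so the scale of a hypothetical Intel order is well within what TSMC handles today.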
> I’m intrigued that Intel would be chasing the low cost 6nm however.
My guess would very much be availability of delivery dates; I would be unsurprised if the first couple years of 5nm was largely sold out, and provided you can deal with the lower density 6nm (which is, after all, primarily just an incremental 7nm improvement with the same design rules, etc.) it allows them to bring it to market sooner.
I'm not sure an Italian wouldn't be irritated by "old bridge" as the name of a supposedly high-end electronics product, even though (I'm assuming) most if not all Italians of course know the place; "Ponte Nuovo" would make some sense at least. What's with the bridge metaphor and the cultural appropriation anyway?
While I do read wccftech from time to time, one must realise they are not a reputable site. You need to read everything with the largest grain of salt.
If we could also stop posting or upvoting anything from wccftech, that would be great.
I think that the radius of a Silicon atom is about 0.1 nm. ~1 nm may be the limit of what's possible. The next nodes after 3 nm are likely going to be 2 nm and 1.4 nm.
3 nm is not available yet, though. Current best is 7 nm, I believe.
Is shrinking to smaller scales a repeatable procedure, or does it depend on new findings in science that no one anticipated?
I'd really like to know if the fab producers just repeat some kind of procedure to create smaller chips, or if a new prodigy has to be found every other year to make breakthrough findings in order to keep up with Moore's Law.
Making transistors and connections is a bit like Minecraft: you need blocks to create structures and in this case the blocks are atoms of semiconductors.
It means that, leaving whatever physics considerations aside, the size of structures is limited by the size of the building blocks.
1 nm is only about 5 silicon atoms wide, so structures cannot be physically shrunk much further. I suspect there are further physics limitations too (e.g. you may need an isolation channel to be wide enough to actually isolate, by preventing things like tunneling).
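The atom-counting argument is easy to verify; the covalent radius used here is an approximate textbook value:

```python
# How many silicon atoms fit across 1 nm,
# given a covalent radius of ~0.11 nm (approximate textbook value).
si_radius_nm = 0.11
si_diameter_nm = 2 * si_radius_nm

atoms_per_nm = 1.0 / si_diameter_nm  # roughly 4.5, i.e. "about 5 atoms"
```

So a 1 nm feature is only a handful of atoms wide, which is why nobody expects meaningful shrinks much below that.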
Each additional step requires solving a lot of new engineering problems, and usually some new physics too, and it's not a linear process: many techniques are proposed that become cost-effective only when previous techniques hit their size limits. Then a lot of research is done; most techniques can never be made to work at scale, and just a few pan out and become the next process.
I'm no expert, but I think the lithographic processes are the most important part.
That's why companies like ASML are so critical. The technology to work at these scales and wavelengths is very tricky.
From Wikipedia: "ASML manufactures extreme ultraviolet lithography machines that produce light in the 13.3-13.7 nm wavelength range. A high-energy laser is focused on microscopic droplets of molten tin to produce a plasma, which emits EUV light." (that's UV close to the X-ray range...)
> I think that the radius of a Silicon atom is about 0.1 nm. ~1 nm may be the limit of what's possible. The next nodes after 3 nm are likely going to be 2 nm and 1.4 nm.
The node names do not represent any actual measurement of the chip anymore, and have not for at least a decade. There is no feature on a "5nm" chip that measures 5nm, and scaling is in fact largely dominated by how close you can place the transistors, not the size of the transistors themselves.
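For illustration, here are approximate pitches for two recent nodes; these are rough figures from press analyses, not official TSMC numbers, but they show that nothing on a "5nm" chip measures anywhere near 5 nm:

```python
# Approximate, publicly reported pitches (nm) for two "Xnm" nodes.
# Rough figures from press analyses; used only to show that no
# physical dimension matches the node name.
nodes = {
    "7nm (TSMC N7)": {"gate_pitch": 57, "min_metal_pitch": 40},
    "5nm (TSMC N5)": {"gate_pitch": 51, "min_metal_pitch": 30},
}

# The smallest listed dimension on the "5nm" node:
smallest_5nm_feature = min(nodes["5nm (TSMC N5)"].values())
```

Even the tightest pitch on a "5nm" node is several times larger than the name suggests; the name is a marketing shorthand for a density generation, not a measurement.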
Taking the job which allows you to retain a greater portion of the value of your own labor is not "poaching". TSMC apparently decided they weren't worth enough to keep at that price.
We are having a fun day on PTT, a popular Taiwanese BBS frequented by a lot of real TSMC engineers.
One example: https://www.ptt.cc/bbs/Stock/M.1595925811.A.03E.html, where people are poking fun at how only naive retail investors (韭菜, literally "chives", i.e. suckers ripe for harvesting) believe the 5nm/3nm CPU news.