
The Intel Split

Courtesy of Ben Thompson, Stratechery

Intel’s earnings are not until next Wednesday, but whatever it is that CEO Pat Gelsinger plans to discuss, it seems to me the real news about Intel came from the earnings of another company: TSMC.

From the Wall Street Journal:

Taiwan Semiconductor Manufacturing Co., the world’s largest contract chip maker, said it would increase its investment to boost production capacity by up to 47% this year from a year earlier as demand continues to surge amid a global chip crunch. TSMC said Thursday that it has set this year’s capital expenditure budget at $40 billion to $44 billion, a record high, compared with last year’s $30 billion.

Tim Culpan at Bloomberg described the massive capex figure as a “warning” to fellow chipmakers Intel and Samsung:

From a technology perspective, Samsung is the nearest rival. Yet a comparison is skewed by the fact that the South Korean company also makes display screens and puts most of its semiconductor spending toward commodity memory chips that TSMC doesn’t even bother to make.

Then there’s Intel, the U.S. would-be challenger that’s decided to join the foundry fray. In addition to manufacturing chips under its own brand, Intel Chief Executive Officer Pat Gelsinger last year decided he wants to take on TSMC and Samsung — and a handful of others — by offering to make them for external clients.

But Intel trails both of them in technology prowess, forcing the California company into the ironic position of relying on TSMC to produce its best chips. Gelsinger is confident that he can catch up. Maybe he will, but there’s no way the firm will be able to expand capacity and economies of scale to the point of being financially competitive.

It’s worse than that, actually: by becoming TSMC’s customer, Intel is not only denying itself the scale of its own manufacturing needs, but also handing that scale to TSMC, improving the economics of its competitor in the process.

Gelsinger’s Design Tools

One of my favorite quotes from Michael Malone’s The Intel Trinity is about how “Moore’s Law” — the observation by Intel co-founder and second CEO Gordon Moore that transistor counts for integrated circuits doubled every two years — was not a law, but a choice:

[Moore’s Law] is a social compact, an agreement between the semiconductor industry and the rest of the world that the former will continue to strive to maintain the trajectory of the law as long as possible, and the latter will pay for the fruits of this breakneck pace. Moore’s Law has worked not because it is intrinsic to semiconductor technology. On the contrary, if tomorrow morning the world’s great chip companies were to agree to stop advancing the technology, Moore’s Law would be repealed by tomorrow evening, leaving the next few decades with the task of mopping up all of its implications.

Moore made that observation in 1965, and for the next 50 years that choice fell to Intel to make. One of the chief decision-makers was a young man in his 20s named Patrick Gelsinger. Gelsinger joined Intel straight out of high school and worked on the team developing the 286 processor while studying electrical engineering at Stanford; he was the fourth lead for the 386 while completing his master’s. After he graduated, Gelsinger became the lead of the 486 project; he was only 25.

The Intel 486 die

Intel was, at this time, a fully integrated device manufacturer (IDM); while that term today refers to a company that designs and fabricates its own chips (in contrast to a company like Nvidia, which designs its own chips but doesn’t manufacture them, or TSMC, which manufactures chips but doesn’t design them), the level of integration has decreased over time as other companies have come to specialize in different parts of the manufacturing process. Back in the 1980s, though, Intel still had to figure out a lot of things for the first time, including how to actually design ever more microscopic chips. Gelsinger, along with three co-authors, described the problem in a 2012 paper entitled Coping with the Complexity of Microprocessor Design at Intel — a CAD History:

In his original 1965 paper, Gordon Moore expressed a concern that the growth rate he predicted may not be sustainable, because the requirement to define and design products at such a rapidly-growing complexity may not keep up with his predicted growth rate. However, the highly competitive business environment drove to fully exploit technology scaling. The number of available transistors doubled with every generation of process technology, which occurred roughly every two years. As shown in Table I, major architecture changes in microprocessors were occurring with a 4X increase of transistor count, approximately every second process generation. Intel’s microprocessor design teams had to come up with ways to keep pace with the size and scope of every new project.
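The cadence the paper describes (transistor counts doubling with every roughly two-year process generation, hence a 4X jump every second generation) can be sketched in a few lines. The starting figure of ~1.2 million transistors for the 486 is a widely cited number, used here purely for illustration:

```python
def transistors(start_count: int, generations: int) -> int:
    """Projected transistor budget after a number of process generations,
    assuming the doubling-per-generation cadence described in the paper."""
    return start_count * 2 ** generations

# The 486 shipped with roughly 1.2 million transistors (a widely cited
# figure). Two process generations (~4 years) later, the projected budget
# is 4X, matching the paper's "4X every second process generation":
base = 1_200_000
assert transistors(base, 2) == 4 * base
```

This is just compounding: each generation multiplies the budget by 2, so architects planning a major redesign two generations out had to find ways to spend four times as many transistors as the current design.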

