Sunny Lin
Good morning, good evening. Thank you all for attending the conference call with ASE. I'm Sunny Lin, covering Greater China semis at UBS. It's my great honor to host Mr. Yin Chang, Executive VP of ASE for Sales and Marketing. He will be sharing with us how advanced packaging innovations are evolving to support cloud AI technologies. ASE's IR team, Ken Hsiang, Iris Wu and Chiayi Liao, will also be on the line to take questions with Yin toward the end of the event. Throughout the session, if you have any questions, please feel free to e-mail them to me at sunny.lin@ubs.com. So with that, let me hand over to you, Yin, for the presentation.
Yin Chang
Thank you, Sunny. Good morning, everyone. Thank you for this opportunity to share ASE's view on advanced packaging and how we are driving AI forward. Next page. The starting point is simply that AI is here. AI applications are changing how we look at health care, telecommunications, retail and financial services, and they will grow the AI economy from $189 billion in 2023 to over $4.8 trillion in 2033, a dramatic 25-fold increase. It is fueled by all the data that we, as consumers, generate for AI to consume, learn from and then run inference on in future AI applications. Next. If you look at AI spending, we all know it is exploding. In Q2 2025 it hit a new high of $87 billion from just 8 major hyperscale builders, from Alphabet to Oracle, and CapEx as a percentage of revenue also broke 45% in Q2 2025. We expect this trend to continue, and this is great news for ASE's semiconductor exposure. Next. If you break down data center CapEx and AI semiconductor spend, the majority of the spend is in compute, which is where we will focus most of this talk, followed by networking and memory, and then power. We will also address some of the power concerns in AI's future. Next. This chart shows something very interesting. In 2020, 2021, 2022 and even 2023, revenue per device correlated very well with volume: as you sold more, revenue increased. Starting in 2024, that trend changed dramatically: even though units are not selling in much greater numbers, the value people are willing to spend on these devices has increased dramatically. So while volume growth is modest, the revenue generated is tremendously higher, which means the value that companies such as ASE add to an AI system is also being monetized and valued in the AI equipment market.
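The growth figures quoted above imply a steep compound rate; as a quick back-of-envelope check (the dollar figures are the ones from the talk, the CAGR formula is standard):

```python
# AI economy size cited in the talk: $189B (2023) -> $4.8T (2033).
start, end, years = 189e9, 4.8e12, 10

multiple = end / start                  # total growth multiple
cagr = multiple ** (1 / years) - 1      # implied compound annual growth rate

print(f"{multiple:.1f}x over {years} years ≈ {cagr:.0%} CAGR")
```

So the "25-fold increase" works out to roughly a 38% compound annual growth rate over the decade.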
This shows tremendous promise for us going into the future. Next. So what are the key demand challenges in AI? Obviously, performance is the #1 requirement. The dramatic increase in compute demanded by the latest large language models pushes us toward higher memory bandwidth and more HBM capacity on each GPU or accelerator chip. That creates an area problem, because as the number of chips we put down grows, from 4 HBM to 8, to 10, to as many as 16 HBM, the package area increases dramatically. We are now looking at 100 by 100 millimeter packages and how to deal with those large areas. Next is power: how do we power so many chips within just one blade, and now 72 chips together, or up to 154 chips together in the latest NVIDIA requirements? And once you put in that much power, thermal becomes the next key packaging challenge of the AI era. Next. One more tab. This chart shows why compute is being driven so quickly: the data set sizes and compute performance of the various large language models are growing at 3 to 4x per year. At the current rate, we are looking at 1.7x growth in chip quantity just for the AI industry, and we need to improve the performance of each chip by almost 1.5x per year, significantly faster than Moore's Law. That is a challenge for the chip designer and the chip foundry, but it is where advanced packaging can really come into effect: putting everything together to achieve this performance requirement without the full benefit of Moore's Law. That is the value of advanced packaging. Next. So let's look at the AI models themselves.
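Taking the per-year factors quoted above at face value, chip count and per-chip performance together still fall short of model demand growth, and that shortfall is exactly what packaging-level integration is being asked to close. A sketch of that arithmetic (the 3.5x is my midpoint of the 3-4x range quoted, not a figure from the talk):

```python
chip_count_growth = 1.7     # chips per year, per the talk
per_chip_growth = 1.5       # per-chip performance per year, per the talk
model_demand_growth = 3.5   # assumed midpoint of the quoted 3-4x per year

supply_growth = chip_count_growth * per_chip_growth   # what silicon alone delivers
gap = model_demand_growth / supply_growth             # what the rest of the stack must supply

print(f"silicon alone: {supply_growth:.2f}x/yr, remaining gap: {gap:.2f}x/yr")
```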
You can see that just from April 2023 to October 2025, performance has increased almost 50% in terms of how fast the models have improved, from OpenAI, from xAI, from Alphabet. That is what the compute consumption looks like, and that is the requirement being put to the silicon players and to ASE as a company: come up with solutions that can supply the ever-increasing, insatiable compute demanded by these AI models. Next. Beyond compute itself, if we are able to achieve the amount of compute shown on the previous slides for the various models, what you really require is the data to feed those AI accelerators. If you look at the high-performance computing HBM road map, the number of HBM stacks per generation continues to grow. On the left-hand side are the MI300 and MI350; we didn't put the MI450 on there. On the middle road map, they will be reaching 16 in Rubin 300, and on various other models we are also looking at at least 12 HBM in the coming year. This is what drives the area portion of our advanced packaging challenge. At the very bottom, on the NVIDIA Rubin Ultra, the substrate area is 153x77 and the interposer size is 124x50. It is a tremendous advanced packaging challenge to put this many dies into such a space. Next. On the HBM integration trend, the reason the industry is driving to HBM4 is to leverage ever-faster bandwidth and transmit as much data as possible, up to 1.5 terabytes per second. And putting 16 HBM around 3 chiplets, as in the diagram shown, creates a much larger interposer requirement and also requires more RDL layers to connect all these chips together. And this is advanced packaging.
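The 1.5 terabytes per second figure is consistent with a very wide HBM interface; as a hedged illustration (the 2048-bit width and ~6 Gb/s per pin are typical published HBM4-class numbers, not figures from the talk):

```python
bus_width_bits = 2048    # HBM4-class interface width per stack (assumed)
pin_rate_gbps = 6.0      # per-pin data rate in Gb/s (assumed)
stacks = 16              # HBM stacks per package, per the Rubin Ultra discussion

per_stack_tbps = bus_width_bits * pin_rate_gbps / 8 / 1000   # TB/s per stack
aggregate = per_stack_tbps * stacks

print(f"{per_stack_tbps:.2f} TB/s per stack, {aggregate:.1f} TB/s across {stacks} stacks")
```

Under those assumptions one stack lands right around the quoted 1.5 TB/s, and a 16-stack package approaches 25 TB/s of aggregate memory bandwidth, which is what drives the interposer and RDL routing burden.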
This creates a challenge but also an opportunity for system and chip architects to create unique chiplet-and-memory solutions to support next-generation AI compute requirements. Next. The key challenge for large-size modules is that as they get bigger and bigger, the number of units per 300-millimeter wafer drops significantly. At 100x100, there are already only 7 modules or 7 dies per wafer, and the count keeps dropping as the package and interposer get bigger. So the challenge for us is how to maintain yield while the number of packages or interposers per wafer shrinks. We have a solution for that in the coming slides. Next. Alongside the compute solution, one of our challenges is how to deliver power. Power is key to the success of the AI chipset. We all know that when you route copper through the substrate, you have routing losses, and the distance from the VRM to the chip is very important. How do we reduce that distance and the voltage losses? The question is how to put the power solution as close to the silicon as possible, and what solutions can put vertical voltage regulation onto the package itself. This is something we need to figure out to deliver precise power into these complex chiplet-and-HBM structures. Next. If you look at overall AI compute rack power: in 2020 we were looking at 10 kilowatts per rack, but by 2024, with Blackwell, it is already 120 kilowatts per rack, driven by the number of chips within the same rack. And looking at what the future holds, we are already seeing 600-kilowatt rack solutions, and megawatt racks will not be far behind.
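Why the distance from the VRM to the die matters so much comes down to I²R loss at very low voltage and very high current. A sketch with assumed numbers (the 0.8 V core rail and the path resistances are illustrative, not ASE figures):

```python
power_w = 1500.0   # GPU-class die power, per the thermal charts in the talk
vcore = 0.8        # core supply voltage (assumed)
current = power_w / vcore   # ≈ 1875 A that must reach the die

for r_mohm in (0.2, 0.1, 0.05):   # assumed lateral delivery-path resistance, mΩ
    loss = current**2 * r_mohm * 1e-3   # I²R conduction loss in watts
    print(f"{r_mohm} mΩ path → {loss:.0f} W lost in delivery")
```

Even a fraction of a milliohm of lateral routing burns hundreds of watts at these currents, which is why moving the regulator vertically under the die, shortening the path, pays off so directly.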
So one of the key things for us is to deliver solutions not only at the chip level but also at the rack power level, and I think our experience with high-voltage applications in other industries suits us very well in creating power solutions for the higher and higher voltage requirements of the most complex AI racks coming to the marketplace in a very short time. Next. Now, power and thermal. We are looking at thermal solutions in an environment of ever-increasing wattage. This chart is courtesy of AMD. The red bars on the left show CPU power, which gets higher over time; the green bars show GPU power, which is increasing toward 1,500 watts. But ironically, the higher the wattage, the lower the temperature at which the chip needs to operate. At lower power you can actually run the chip hotter, but at higher power you need to run it cooler. That creates an even bigger problem in running the chip at its optimum temperature while power consumption rises. This is something the industry needs to work through, and ASE is going to participate in addressing overall package thermal requirements. Okay. Next. One of the things we are doing for the compute problem discussed earlier is leveraging VIPack, which we announced back in 2022. VIPack is a collection of advanced packaging technologies, from FOPoP, package on package, to 3D ICs, to FOCoS-Bridge, to packaged optics, to FOCoS-SiP and plain FOCoS, which is fan-out chip on substrate. Next. What we are really focused on right now are 2 types of VIPack. One is FOCoS, fan-out chip on substrate, which is very common today in a lot of AI solutions.
You can see some of the cross-sections ASE has done to create the latest chiplet architectures, heterogeneous integration solutions that combine, say, an I/O buffer die with graphics accelerators, or graphics accelerators with HBM. To the right, FOCoS-Bridge is our next solution, the solution of choice for higher-density designs, where we use a silicon bridge between the HBM and the graphics accelerator to maximize the I/O count across the connection while minimizing the RDL needed to route between those 2 dies, or that collection of dies; there might be 3 GPUs with 16 HBM. The cross-section between the Bridge and a C4 bump shows that our pitch can be as low as 130 microns. Next. This slide shows the FOCoS extension platform, specifically the Bridge, with package sizes going to 100 by 100 millimeters, and it shows the various constructions, the various opportunities, a chip architect can create: a GPU-accelerated die with memory controllers and the memory itself, or parallel GPUs with memory, like a device [indiscernible], or a collection of chiplets with I/O, SRAM, GPU and neural network chips, all connected through the FOCoS-Bridge solution, where the Bridges connect each chip to the die next to it. These are the opportunities we see with FOCoS-Bridge: it enables a tremendous amount of creativity for the AI chips of the future. Next. One challenge is that as the package gets bigger, utilization of a 300-millimeter wafer starts to drop. As we mentioned earlier, once you get to 5 or 6 reticle sizes, the number of chips per wafer drops to 8, 7, possibly 6, and that is only 57% utilization. So we really need to figure out how to increase utilization, and ASE has been working on a panel solution.
We have demonstrated 300-millimeter and 600-millimeter panels that show we can increase overall utilization from 57% up to 87%. That dramatically helps us produce these complex solutions at scale, and scaling to as high a volume as possible is the key for ASE. Next. This shows an actual example of panel FOCoS-Bridge: basically 2 chiplets with HBM on a large panel. You have 2 SoC dies, SoC 0 and SoC 1, and we put 10 chiplets with 10 silicon bridges onto that one section. The middle chart shows the whole panel and how we put it together. This is a fan-out construction using a laser direct via solution, and it increases the number of units per panel versus a wafer for the Bridge construction. Next. On ASE's panel road map, we are looking at 310x310 for HPC and AI with fine-pitch, 2-micron line and space, with Bridge and IPT. And we are looking at large panels for fan-out MCM, or what some people call wafer-level MCM, for mobile or edge AI applications that don't require fine line and space, allowing us to do the full fan-out RDL as the substrate. You effectively create a very thin multichip module with complex RDL underneath. The slide shows the 600-millimeter fan-out MCM, as well as the chip glass and Bridge solutions mentioned earlier, which give us the 310 and then move to 600. Next. Once we can put all the chips onto a panel, wafer or module, the next thing we really want to look at is the power solution: how do we drive all the chips with the necessary power through first- and second-stage regulators? We have created powerSiP, which allows us to put the first- and second-stage regulators directly underneath the substrate, minimizing the distance between the power source and the silicon itself.
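The 57% versus 87% utilization comparison can be sanity-checked with a simple packing model. Everything here, the ~70x70 mm module (roughly a 5-6 reticle package), the 3 mm edge exclusion and the shared-grid placement, is an assumption for illustration, not ASE's actual layout:

```python
import math

def modules_on_wafer(w, h, diameter, edge=3.0, step=1.0):
    """Max count of w×h modules on a shared square grid fully inside
    the usable circle (wafer diameter minus edge exclusion)."""
    r = diameter / 2.0 - edge
    best = 0
    ox = 0.0
    while ox < w:                     # scan grid offsets at `step` resolution
        oy = 0.0
        while oy < h:
            n = 0
            for i in range(int(-r // w) - 1, int(r // w) + 2):
                for j in range(int(-r // h) - 1, int(r // h) + 2):
                    x0, y0 = ox + i * w, oy + j * h
                    corners = [(x0, y0), (x0 + w, y0),
                               (x0, y0 + h), (x0 + w, y0 + h)]
                    if all(math.hypot(x, y) <= r for x, y in corners):
                        n += 1
            best = max(best, n)
            oy += step
        ox += step
    return best

mod = 70.0  # assumed module edge in mm, ~5-6 reticles of area
wafer_n = modules_on_wafer(mod, mod, 300.0)
wafer_util = wafer_n * mod * mod / (math.pi * 150.0**2)

panel = 600.0                      # square panel: simple grid, no round edge loss
panel_n = int(panel // mod) ** 2
panel_util = panel_n * mod * mod / panel**2

print(f"wafer: {wafer_n} modules, {wafer_util:.0%}; panel: {panel_n} modules, {panel_util:.0%}")
```

Under these assumptions the round wafer lands at 8 modules and roughly 55% area utilization, close to the 57% quoted, while the 600 mm square panel holds 64 modules at about 87%, matching the figure in the talk. The gain comes almost entirely from eliminating the circular edge loss.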
So instead of placing them side by side, as shown on the right-hand side, where we take 12 volts down to 1 or 0.8 volts, or do just a first stage and second stage, we now put both underneath as a vertical regulated module, what we call powerSiP. That delivers power as close as possible to the silicon, reduces overall losses and achieves maximum power delivery efficiency. Next. This shows the latest thinking on data center power. On top is today's approach: alternating current from the grid to the data center, dropping it from plus or minus 400 down to maybe 480 or lower, or plus or minus 220, a 440 solution, into the data center, then distributing at 400-volt direct current. A more efficient approach is to convert directly from the plus or minus 400 AC down to 800-volt direct current using solid-state conversion, and drive the whole backbone of the data center at the higher voltage. That creates more power efficiency throughout the grid and supports growth of the overall infrastructure going forward, and a simpler distribution system with fewer points of failure makes for a more robust data center. This also aligns with leveraging what we have already learned in other industries that use 800-volt systems, so ASE is in a prime position to work with customers in developing these 800-volt DC systems. Next. This shows one example of using gallium nitride or silicon carbide as the primary chip and module solution to create a monolithic low-voltage conversion at the silicon itself, allowing us to step down from 48 volts to 12, 6 or 0.71 volts. This is a better, more solid-state solution than going through AC conversions.
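The efficiency argument for the 800-volt backbone is the same I²R logic one level up: for a fixed rack power, doubling the distribution voltage halves the current and quarters the conduction loss. A sketch with an assumed busbar resistance (not an ASE figure):

```python
rack_kw = 120.0    # Blackwell-class rack power cited in the talk
r_busbar = 0.001   # assumed end-to-end DC busbar resistance, ohms

for volts in (400.0, 800.0):        # today's 400 V DC vs. the proposed 800 V DC
    amps = rack_kw * 1000 / volts
    loss = amps**2 * r_busbar       # I²R conduction loss in the distribution path
    print(f"{volts:.0f} V DC: {amps:.0f} A, {loss:.1f} W conduction loss")
```

The 4x loss reduction holds for any fixed resistance; thinner copper and fewer conversion stages are the practical payoffs.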
For ASE, this is another opportunity to create more value within the data center ecosystem. Next. Driving power through electrons is one challenge, but another way to attack the same problem is to convert electrons into photons. ASE has put tremendous effort into full-package optics, basically moving data transfer onto photons, and we believe the future of the data center is the combination of electrons and photons. Next. This shows the various toolbox capabilities ASE has demonstrated: passive alignment for fiber attach, creating cavities through laser direct etch, and chip-on-wafer to put the electric IC on top of the photonic IC through a fan-out POP solution. You can see the various cross-sections, which together create a silicon photonics engine that can be used as part of the CPO solution I will show a little later, as well as the various ways to attach the laser diodes that provide the laser source for the photonics. Obviously, submicron accuracy is critical for all of this die attach and chip-on-wafer integration. These are the tools ASE has already developed to help our customers create next-generation optical solutions for AI hyperscale data centers. Next. There are 3 key challenges in putting an optical engine onto a CPO. One is really warpage: there is warpage on the optical engine and warpage on the substrate, which is organic, and all of this affects how we do fiber attach, whether by active or passive alignment. You can see the large ring on top of this CPO demonstration we did for the customer, which we will show on the next page. Next page.
This is the CPO test vehicle we built, where you can see the network IC in the middle with the different optical engines, allowing us to connect 8 different optical fibers onto this switch solution. You can see what we did with the [indiscernible] rings to control warpage, not only on the optical engine but on the substrate itself. This is a really large package, 75x75 in size, all joined by copper pillars with the fan-out POP solution. Next. This shows the wealth of solutions we are offering in the toolbox, whether it's the ASIC chip on substrate or the optical engines, and how each optical engine and the substrate are connected through copper pillars and copper bumps. These demonstrate that ASE can execute a large-panel network switch, basically a CPO solution, for next-generation AI systems. Next. Why do we want to do that? Because high-density RDL packaging such as VIPack is only one solution we offer; it is not everything we can do. We need to add the photonic system. By adding photonics, we can dramatically increase overall compute performance while reducing power, because photons do not suffer the same losses as electrons through copper wires. So the combination of high-density RDL such as VIPack and the CPO shown earlier can dramatically increase overall compute to meet the latest LLM requirements. Next. Lastly, thermal. Using the same chart, with GPU power over 1,500 watts and CPU power over 600 watts, there are many ways we are looking at it. Today, ASE is really looking at the system solution: a standard cold plate sitting on TIM 2, which sits on the heat spreader, which sits on TIM 1 on top of the die itself.
Right now, we are looking at various materials that can improve heat conduction between the silicon and the cold plate itself. But we are also examining the potential of silicon-level solutions, where the coolant is directly in touch with the silicon itself. Instead of having 2, 3 or 4 different thermal interfaces, we bring the coolant directly to the silicon to dissipate the heat and let the chip run at its optimum temperature. This is the next generation, where you can see the solution migrating from system level to chip level, which offers ASE another opportunity to develop silicon-level solutions and deliver the next-generation compute power needed for future AI requirements. For the TIM solutions we have talked about, today we use the standard dispense method, we have developed the graphite method and we have done solder-type TIMs, and you can see our ability to increase thermal conductivity from below 10 to right around 86. So we are working at various levels to improve thermal conductivity across the various interfaces, but as I mentioned earlier, the bigger potential is not just improving the thermal coefficient of the interfaces but bringing the coolant directly onto the silicon itself, and that will be the next-generation development. So stepping back across the overall packaging innovation, architecture and technology: on performance, we already see the latest AI models pushing required performance by 2 to 7x, and that is what the AI chip has to meet. For that, we need to drive the memory; for the memory to increase, we need to drive the area; once you put the compute chips together, we need to figure out how to deliver precise power directly onto that array of silicon on top of the module; and once you put in the power, obviously, thermal is the next consideration.
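The jump in TIM conductivity from below 10 to around 86 translates directly into die temperature headroom. A hedged estimate (the bond-line thickness and contact area are assumed typical values, not figures from the talk; the conductivities are the endpoints quoted):

```python
power_w = 1500.0       # GPU-class power from the AMD chart discussed
area_m2 = 8.0e-4       # ~8 cm² die/lid contact area (assumed)
thickness_m = 100e-6   # ~100 µm TIM bond line (assumed)

for k in (10.0, 86.0):  # thermal conductivity in W/m·K, endpoints quoted in the talk
    r_th = thickness_m / (k * area_m2)   # 1-D conduction resistance, K/W
    dt = power_w * r_th                  # temperature drop across the TIM
    print(f"k={k:g} W/m·K → {dt:.1f} K across the interface")
```

Under these assumptions, the low-k interface alone eats nearly 19 degrees of thermal budget at 1,500 watts, while the high-k material costs only about 2, which is exactly the headroom a hotter-running GPU needs, and why eliminating the interfaces entirely with direct coolant contact is the logical next step.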
And with the packaging itself, we showed how we use heterogeneous integration to combine various functions, whether CPU, GPU, XPU, various I/O or memory chip solutions, along with HBM, whether in chiplet form, in a fan-out solution with a bridge, or using a 2.5D silicon interposer. If the package gets too big, we look at 300-millimeter or 600-millimeter panels to leverage overall efficiency, maximize yield and scale the overall solution. On power, we are demonstrating vertical voltage regulators, putting the regulator as close to the silicon as possible and creating the backside power delivery that is needed. And converting electrons into photons, as in CPO, also reduces the power consumed for a given amount of compute; obviously, as we deliver more compute, the CPO requirement will also increase. And the last thing is thermal. ASE is looking into next-generation cooling structures beyond the thermal interfaces and thermal interface materials we already produce, through silicon microchannels or possibly even new material sets. That gives you a summary of what we are looking at in terms of innovation to fulfill the compute needs of the overall industry. So in summary, we see AI and data continuing to fuel semiconductor innovation. The proliferation comes with the amount of money the overall industry is putting in, asking us to produce the next breakthrough solutions, and we are accelerating through heterogeneous integration advancement: putting various types and various sizes of functional die together, side by side and in 3D formats. This type of heterogeneous integration is the innovation, I think, that fuels AI and data growth.
And last, we truly believe that package creativity is an enabler of the AI growth path. It really helps enhance the functionality and improve the overall efficiency of any particular compute silicon solution. With that, I thank you for your attention and time.
Sunny Lin
Sure. Thank you very much, Yin, for your great presentation. Now let's move on to the Q&A session. Once again, if you have any questions, please feel free to e-mail me at sunny.lin@ubs.com. So let me kick off. Maybe, Ken, the first question is for you. Since we have you, there are lots of questions on how ASE's LEAP and testing segment will scale going into 2026. In October, management guided for over $1 billion of sales upside going into 2026 on top of this year's USD 1.6 billion. Could you perhaps share with us how the outlook has evolved going into 2026 since you reported? And how should we think about the ramp for the business across, maybe, outsourcing, your full-process CoWoS and final test?
Kenneth Hsiang
So for 2026, thus far we have not given a tremendous amount of color, as you mentioned. The only real comment we have made is that leading-edge advanced packaging will grow by more than $1 billion next year. The components of that haven't been particularly talked about, but I think it would be fair to say they are led by our traditional LEAP services, meaning on-substrate services, and also, to a certain extent, the testing related to such devices. Toward the back half of the year, we should see a much more pronounced ramp-up in our full-service type applications and services.
Sunny Lin
Got it. Also, in terms of the ramp of the full-service type of packages going into the second half of 2026, how should we think about the technology, if you may? Will it be driven by more traditional FOCoS, or will it be a combination of FOCoS and FOCoS-Bridge? And how should we think about your technology readiness for FOCoS-Bridge? Would you say the yield has now reached a good level and, therefore, you are seeing increasing customer engagement?
Kenneth Hsiang
FOCoS-Bridge, as Yin mentioned in his presentation, we do believe to be quite an important part of the overall ramp in AI system architecture at the chip level. It provides an incredible performance increase between the processing unit and the memory dies, so it is particularly important for our ongoing ramps going forward. We have not talked about, again, the makeup of 2026, but on the full process, I think many on the sell side, including yourself, have written up this particular trend. Again, we don't have any new information, but we do believe this should be increasingly important. In terms of yield, we don't generally disclose yield, but we do have full-process work that we are completing or providing right now during 2025. In 2026, we should see maybe a different set of customer products, perhaps ramping toward the back half of the year.
Sunny Lin
So on margins for LEAP and testing: the company has said it earns a higher margin there, so it is accretive versus IC ATM. But IC ATM gross margin is at a low base for 2025. Therefore, when management talks about the segment being margin accretive, is it fair to say it is higher even than the high end of the IC ATM structural gross margin range of about 30%? So the segment gross margin should be over 30%? That's the first part of the question. The second part is, how should we think about the margin outlook for the segment going into 2026? Should we expect better margins given larger scale, maybe better yield and also the ramping of the full process?
Kenneth Hsiang
So leading-edge advanced packaging is accretive to our structural margins. On structural margins, we generally talk in terms of roughly 70% overall utilization being tied to 24%, the trough of the structural range, and full utilization at around 85% or so being tied to a 30% ceiling margin. But leading-edge advanced packaging in total, all the components, does create incremental margin, or is accretive to the overall structural mix, right? That would mean each of those components is higher, so to say. In terms of what we're looking at for next year, again, we're not commenting a lot on that. But in total, we do believe that, with the FX headwinds ideally behind us, we should see a much friendlier environment for our margin structure. This year, I think we did show a decent amount of margin recovery, especially if you take out or adjust for the FX component. So next year, we should continue to see the overall margin environment improve, and I think Joseph talked about next year having full-year margins well within the structural range.
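Ken's structural-range framing (24% gross margin at 70% utilization, 30% at 85%) can be read as a simple linear relationship between utilization and margin. A purely illustrative interpolation; the linearity is my assumption, not guidance:

```python
def structural_margin(utilization):
    """Linear read of the quoted structural range: 24% GM at 70% utilization,
    30% GM at 85% utilization, clamped outside that range (an assumption)."""
    u_lo, m_lo, u_hi, m_hi = 0.70, 0.24, 0.85, 0.30
    u = max(u_lo, min(u_hi, utilization))   # clamp to the quoted band
    return m_lo + (m_hi - m_lo) * (u - u_lo) / (u_hi - u_lo)

for u in (0.70, 0.775, 0.85):
    print(f"{u:.1%} utilization → {structural_margin(u):.1%} gross margin")
```

On this reading, each point of utilization in the band is worth roughly 0.4 points of gross margin, and the accretion claim is that LEAP components sit above whatever this baseline implies at a given utilization.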
Sunny Lin
Got it. Maybe moving on to a question on the panel-level packaging for HPC applications that you talked about. Where is ASE in terms of technology readiness for, let's say, 300 for HPC? And based on your current technology development and client engagement, when do you think we should see the first wave of product migration? Will it be 2028, as people talk about? Or do you think it will take a bit longer?
Kenneth Hsiang
I think panel is part of the overall set of delivery, services and products that we offer. In terms of moving toward an overall panel service and full readiness, right now we have equipment coming in during this year, some level of qualification next year, and then maybe a minor level of revenue toward the end of next year. As of this point, we haven't seen a mass migration yet, but that's not to say this won't happen. I think a lot has to do with the overall ecosystem being ready, meaning machinery, meaning things that may not be within our control. But again, this is part of an overall view. As panel does become more ready, I think it provides an opportunity for leading-edge advanced packaging type services to permeate into different levels of products, not just the very peak of the pyramid, so to say, of electronics technology or usage. So we might see things drift toward mobile or other applications, and I think that might allow leading-edge advanced packaging to grow even faster.
Sunny Lin
明白了。那么Ken,如果我们退一步看CoWoS或FOCoS,可以说日月光(ASE)的产能爬坡可能比一些同行稍晚一些。现在随着HPC可能向面板级迁移,您是否认为日月光已经开始非常努力地工作,以便能够应对第一波机会(如果到来的话)?
Sunny Lin
Got it. So Ken, if we take a step back and look at CoWoS or FOCoS, it may be fair to say ASE's ramp was maybe a bit later than some of your peers'. So now, with the potential migration to panel level for HPC, would you say ASE has started working very hard to be able to address the first wave of opportunities if it comes?
Kenneth Hsiang
我认为我们的立场一直相当稳健。我们喜欢在技术成熟时采取行动,而不喜欢过于超前。我认为那些情况会导致公司整体回报不尽如人意。我们倾向于看到机械设备或标准得到相当充分的发展后,再真正扩大规模。因此,从我们的角度来看,我们正处于我们喜欢的位置。我们不一定有明确的时间表。但我认为如果市场确实需要我们扩大规模,我相信我们可以与我们的代工合作伙伴一起在这个特定领域做好准备。
Kenneth Hsiang
I think our position has always been fairly steady. We like to do things when the technology is mature; we don't like to get very much ahead of it. I think those situations result in less than optimal returns for the overall company. We do like to see machinery or standards become fairly well developed before we really scale things up. So from our perspective, we are where we like to be. We don't necessarily have a timeline in place. But if the market does call upon us to scale up, I think we can be ready along with our foundry partners in this particular area.
Sunny Lin
明白了。那么关于HVDC这个非常有趣的话题,也许您可以分享更多关于OSAT或日月光如何能发挥更重要作用的细节。与当前解决方案相比,HVDC的内容价值如何?我们是否应该假设HVDC的封装会更加复杂,从而为您提供扩大价值的机会?
Sunny Lin
Got it. And then on this very interesting topic around HVDC, maybe you could share a bit more color on how OSATs, or ASE, could play a more important role. What's the content difference, let's say, between the current solution and HVDC? Should we assume the packaging for HVDC will be more complicated and, therefore, an opportunity for you to expand value going forward?
Kenneth Hsiang
我不太清楚您刚才使用的缩写,H——您是怎么发音的?
Kenneth Hsiang
I'm unaware of the abbreviation you used there, the H -- what did you pronounce it?
Sunny Lin
高压直流输电。基本上,Yin在演讲中提到,未来数据中心可能会迁移到800伏,以提高电源效率,甚至减少覆盖范围。
Sunny Lin
High Voltage Direct Current power delivery. Basically, Yin talked about in the presentation that data centers could potentially migrate to 800 volts in the future for better power efficiency, even less coverage.
Kenneth Hsiang
我认为从我们的立场来看,从自然的角度出发,电压转换越来越靠近芯片本身变得日益重要。我想Yin已经强调了这方面的几个关键点。随着能效变得越来越重要,不仅是从成本节约的角度,更是从全球能源——我在这里想说的是,一种环保的角度来看,因为AI预计将消耗相当于核反应堆的电力,我认为这种能效交付能力或提供这种能力的方式变得越来越关键。从几何规模的角度来看,ASE作为连接这些消耗大量电力的芯片的桥梁,我认为我们处于有利位置。无法采用单片式方法为这些芯片供电,这使得ASE成为此类技术或电力传输方法及其变革的自然提供者。因此,我认为随着这些产品的进一步发展,这对我们来说是一个巨大的机遇。我在这里没有太多额外信息可以提供,但我们正在这个领域的多个关键方面开展工作。
Kenneth Hsiang
I think from where we sit, just from a natural perspective, voltage conversion getting closer and closer to the die becomes ever more important. Yin highlighted a couple of key points on that. And as power efficiency becomes more and more important, not just from a cost savings perspective but from a global power, eco-friendly type perspective, in which AI is projected to consume nuclear reactors' worth of power, I think this type of power-efficient delivery, or the capability to deliver it, becomes increasingly important. ASE is well positioned, just from a geometric scale perspective, as the bridge to these dies that are consuming a lot of power. Not being able to provide power into those dies monolithically makes ASE a very natural provider for such technology, or for electrical delivery methods and changes in those methods. So I think this is a very high opportunity for us as these products develop further and further. I don't have a lot of extra information for you here, but we are working on a number of key fronts in this area.
Sunny Lin
当然,没问题。那么回到全制程的话题,鉴于你们的产能爬坡备受关注。您能否分享更多关于到2026年底你们正在爬坡的产品类型或客户信息?你们的一些竞争对手谈论了很多关于到2026年扩展产品线超越A系列加速器的情况。那么对于你们的全制程爬坡,您能否分享更多关于到明年下半年你们正在爬坡的产品类型和客户信息?
Sunny Lin
Sure. No problem. So maybe back to full process, given a lot of the attention on your ramp. Would you be able to share a bit more on the types of products or clients that you are ramping going into late 2026? Some of your competitors talk a lot about expanding the product base beyond, like, A accelerators going into 2026. So for your ramp on full process, would you be able to share a bit more on the types of products and clients you're ramping going into the second half of next year?
Kenneth Hsiang
再次说明,我们并未就2026年具体涉及哪些产品提供太多细节。但我们确实相信我们的代工合作伙伴总体上相当繁忙。鉴于整个行业资源短缺,我们面临很多机遇。因此,目前我们不会具体说明哪些客户,但我们在这一领域确实拥有非常广泛的接触面。
Kenneth Hsiang
Again, we have not given a lot of color in terms of 2026 or which products we are involved with. But we do believe our foundry partners are fairly busy overall. There are a lot of opportunities available to us given the lack of resources across the entire industry. So at this point, again, we don't specify exactly which customers, but we do have a very wide breadth of exposure in this area.
Sunny Lin
当然,没问题。关于CPO(共封装光学)方面。演示中也展示了在整个过程中的多种封装机会,包括EIC和PIC堆叠、FAU组装,以及可能涉及基板的整体封装。那么我们应该如何看待CPO的业务模式?您是否认为可能存在多种业务模式,就像CoWoS和FOCoS一样,你们可以与代工厂合作,也可以尝试推进全制程。基本上,我们应该如何看待CPO的机遇以及你们的定位?
Sunny Lin
Sure. No problem. And maybe on CPO. The presentation also showcased several packaging opportunities along the process, from EIC and PIC stacking to FAU assembly and the overall packaging onto substrates. So how should we think about the business model for CPO? Would you say there could be multiple types of business models, just like for CoWoS and FOCoS, where you could work with the foundry or also try to ramp full process? Basically, how should we think about the opportunities in CPO and your positioning?
Kenneth Hsiang
我认为,从我们的角度来看,硅光子学确实是一个非常有趣的领域,这一点再次被凸显出来。靠近芯片或与这些处理单元接口,使我们处于一个非常独特的位置,能够作为硅光子解决方案的一部分提供接口。目前有许多不同的标准和方法正在讨论和开发中。在这些情况下,我们尽量保持中立,不偏向任何一种方案。我们只是希望在量产后、有回报可图时,能成为最终解决方案的一部分。目前,硅光子学对我们来说收入水平仍然相对较小,所以我们没有过多讨论我们看到的最终解决方案。但当事情开始大规模推进时,我认为我们可以再谈。从收入角度来看,我不认为硅光子学是2026年展望的主要部分。
Kenneth Hsiang
I think from our perspective, silicon photonics is, again, a particularly interesting area that has been highlighted. Being next to the die, or interfacing to these processing units, puts us in a very unique position in terms of being able to provide the interface as part of silicon photonics solutions. There are a number of different standards and methodologies being talked about and developed at this point in time. In these types of situations, we try to be fairly agnostic; we don't try to push one or the other. We just want to be part of the endgame solution when there are volumes and returns to be had. At this point, silicon photonics revenue levels for us are still relatively small, so we're not talking a lot in terms of the end solutions that we're seeing. But when things do start ramping up in a more major way, I think we can talk about that then. And I don't think silicon photonics, from a revenue perspective, is a major part of the '26 outlook at this point.
Sunny Lin
当然,没问题。如果可以的话,我想稍微转换一下话题,谈谈测试。Ken,大家对你在未来几年内加速器最终测试方面的进展有很多期待。我们都理解有些合作可能需要时间。但你能和我们分享一下你在A加速器最终测试方面的最新进展吗?
Sunny Lin
Sure. No problem. Maybe if I may switch gears a bit to test. So Ken, there are lots of expectations on your ramp, if any, of final test, especially for A accelerators, in the coming few years. We all understand some engagements may take time. But could you share with us the latest progress on your ramp of final test for A accelerators?
Kenneth Hsiang
从我们的角度来看,我们在这个领域进行更多最终测试方面正在取得进展。然而,考虑到我们的建筑和设施准备就绪的时间线,以及其他产品进入的时间线,我认为我们目前的重点可能更倾向于晶圆探针测试。我们应该会在2026年看到显著的晶圆探针测试扩张,然后在今年下半年在AI方面看到更多最终测试的进展。但这些都取决于时间线和客户产品等因素。不过,我们对测试领域的整体增长机会感到相当兴奋。目前,测试占ATM总收入的比例将在年底接近18%到19%的范围。更自然的比例可能是测试和封装之间接近30%或1/3、2/3的关系。所以,如果我们只测试我们封装的产品,我们还有很大的增长空间。我们将继续推进这方面的工作。再次强调,我们的整体测试战略是关于整体测试,而不仅仅是专注于前沿技术或投资者目前可能特别感兴趣的某个客户。
Kenneth Hsiang
I think from our perspective, we are seeing progress in terms of being able to do more final test within this space. However, given the timelines for how our buildings and facilities are coming ready, and the timelines in which other products are going to be coming in, I think our focus right now, and what we're seeing, is probably more geared towards wafer probe. We should see significant wafer probe expansion during '26 and then probably a little bit more final test exposure on the AI front towards the back half of the year. But these are all subject to timelines and customer products. That said, we are fairly excited about our overall growth opportunities within test. Right now, we're going to finish the year closer to an 18% to 19% range in terms of test as part of overall ATM revenue. A more natural percentage is probably closer to maybe 30%, or a 1/3-to-2/3 type relationship between testing and assembly. So we do have quite a bit of growth opportunity if we just test the products that we package, and we will continue to push that forward. And again, our overall test story is about overall test, not necessarily just focused on leading edge or whatever customer investors may be particularly interested in at this time.
Sunny Lin
当然。没问题。但我想对于晶圆探针测试来说,即使是代工厂,其晶圆厂空间也相当有限,因此这确实释放并增加了OSAT的需求机会,你们一直从中受益,但我觉得你们的一些同行今年似乎也在受益。那么我们应该如何看待接下来的情况?您认为市场是否足够大,能够容纳多个晶圆探针测试供应商,因此对你们的扩产速度不会有影响?还是您预计在某些时候可能会出现一些需要我们关注的竞争动态?
Sunny Lin
Sure. No problem. But I guess for wafer probe, even at the foundry, fab space is quite constrained, and that is indeed releasing increasing demand opportunities for OSATs, which you have been benefiting from; but I think some of your peers also seem to be benefiting this year. So how should we think about it from here? Do you think the market is big enough to accommodate multiple suppliers for wafer probe, and therefore there should be no impact on the pace of your ramp? Or would you expect that at some point there may be some competitive dynamics we need to watch?
Kenneth Hsiang
我认为作为全球最大的封装厂商,我们在承接更多晶圆探针测试机会方面具有独特优势。我们也相信我们整体的无人工厂或可能是全自动化解决方案确实有助于提升我们在晶圆探针测试方面的成本和性能表现。因此我们继续预期晶圆探针测试业务将持续扩张。作为整体一站式解决方案的一部分,日月光作为最大的封装厂商,我们应该再次看到——我们应该在接收晶圆探针测试以及最终测试服务方面具有独特优势。
Kenneth Hsiang
I think, being the largest packager out there, we are uniquely positioned to take on more wafer probe opportunities. We also believe that our overall labor-free, or maybe lights-out, type solutions do help contribute to the cost and performance of wafer probe for us. So we continue to expect wafer probe to keep expanding. And as part of an overall turnkey type solution, again, ASE being the largest packager, we should be uniquely positioned to receive wafer probe along with final test in terms of overall test services.
Sunny Lin
当然。明白了。没问题。我其实收到了一个投资者的问题。可能有点技术性。所以如果Ken或Yin能回答的话。是关于电源传输的。Yin,你在演讲中提到日月光能够提供电源套件,包括作为背面电源传输的电压调节器。那么对于这些电压调节器,你们会从IC制造商那里采购吗?还是日月光能够内部制造?
Sunny Lin
Sure. Got it. No problem. So I actually got a question from an investor. It may be a bit technical, so perhaps Ken or Yin could answer. It's on power delivery. Yin, you mentioned in your presentation that ASE is able to provide a power suite, including voltage regulators, as part of backside power delivery. So for those voltage regulators, would you be sourcing them from IC manufacturers? Or would ASE be able to make them in-house?
Kenneth Hsiang
我让Yin来回答这个问题吧。Yin,你想尝试回答一下这个问题吗?
Kenneth Hsiang
Why don't I pass that along to Yin. Yin, do you want to take a stab at that question?
Yin Chang
你能再重复一遍问题吗?
Yin Chang
Can you repeat the question one more time?
Sunny Lin
好的。关于你谈到的电源传输中的电源套件,日月光计划提供其中的电压调节器。那么日月光会从其他公司购买调节器吗?还是你们会内部制造?
Sunny Lin
Yes. So for the power delivery you talked about, the power suite ASE is looking to offer includes regulators. So will ASE buy the regulators from others? Or would you make them in-house?
Yin Chang
我认为模块本身是我们内部制造的。但对于PMIC(电源管理集成电路),通常是客户指定或客户定制的芯片。所以我想是两者的结合。我们会自己制造模块,但芯片本身通常是客户指定或采购的。
Yin Chang
I think the module itself, we make in-house. But the PMIC is typically customer-specified or customer custom silicon. So it's a combination, I guess. We will make the module ourselves, but the chip itself is typically customer-specified or procured.
Sunny Lin
好的。当然。没问题。差不多该结束了。Ken,在结束前你有什么想强调的吗?
Sunny Lin
Okay. Sure. No problem. It's about time to wrap up. Ken, anything you want to highlight before we close?
Kenneth Hsiang
我想可能这次演讲中的关键点是——我们讨论了很多关于我们遇到的技术方面的问题。但我想人们应该记住的关键点是,随着单片制造在提供这些最终解决方案方面的能力越来越有限,过去在单个芯片上创造的很多价值现在正分散到多个芯片中,从而使我们在这个特定领域提供更多价值。因此我们对通过这种技术传播将呈现给日月光的各种机会感到兴奋。
Kenneth Hsiang
I think probably the key point in this presentation -- we've talked a lot about the technical aspects of what we encounter. But the key point people should remember is that as monolithic manufacturing becomes less and less capable of providing these end solutions, a lot of the value that used to be created on a single die is now spreading out into multiple dies, thus having us provide more value in this particular space. So we are excited about the various opportunities that will be presented to ASE via this type of technology propagation.
Sunny Lin
听起来不错。非常感谢。期待2026年。好的。
Sunny Lin
Sounds good. Thank you very much. Looking forward to 2026. All right.
Kenneth Hsiang
好的。谢谢。非常感谢。
Kenneth Hsiang
All right. Thank you. Thank you very much.
Yin Chang
谢谢。
Yin Chang
Thank you.
Sunny Lin
再见。
Sunny Lin
Bye-bye.
Kenneth Hsiang
好的。再见。
Kenneth Hsiang
All right. Bye-bye.