
Expect 2012’s Packaging Co-Design Improvements To Continue Into 2013

Oct. 30, 2012

Getting software with new features excites me almost as much as getting the latest electronic gadget. It makes me feel like a kid at Christmas getting ready to open gifts. New features should make life easier, eliminate repetitive and inefficient steps, and improve the results. We should be more productive, doing more with less, and leveraging improved automation, optimization, and computer hardware.

Each year sees co-design software vendors vying with each other to produce the best tools, innovating with the most improvements, gaining customer mind share, and battling it out with competitors to win more seats and grow their user base. This year was no exception when it comes to improvements made in the co-design battlefield.

The Need For Speed

Speed and accuracy are the two major factors I look for in tool improvements: How much faster is the updated tool at creating and solving the model? Has the accuracy improved or degraded? For newly upgraded tools, a historical suite of benchmarks should be run to validate the solution results, including problems with well-known and trusted analytical solutions. When discrepancies occur, vendor meetings help hammer out corrective actions to restore the previous accuracy levels. This methodology has become so ingrained that most vendors now validate their new solution engines in-house, keeping the customer from being a guinea pig.
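As a concrete illustration of that kind of benchmark check, the sketch below compares a finite-difference solution of a 1-D steady-state conduction problem against its known analytical solution; the geometry, conductivity, and tolerance values are illustrative assumptions, not from any vendor suite.

```python
# Benchmark sketch: compare a finite-difference solution of 1-D steady-state
# conduction against its known analytical solution. Geometry, conductivity,
# and the pass/fail tolerance are illustrative assumptions.
import numpy as np

L = 0.01              # rod length, m
k = 150.0             # thermal conductivity, W/m-K (silicon-like, illustrative)
q = 1.0e7             # uniform volumetric heat generation, W/m^3
T0, TL = 25.0, 25.0   # fixed temperatures at both ends, deg C
N = 101               # grid points

x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

# Finite-difference system for k * d2T/dx2 + q = 0 with fixed-temperature ends
A = np.zeros((N, N))
b = np.full(N, -q * dx**2 / k)
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = T0, TL
for i in range(1, N - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0
    A[i, i] = -2.0

T_num = np.linalg.solve(A, b)

# Analytical solution for equal end temperatures: T(x) = T0 + q*x*(L - x)/(2k)
T_ref = T0 + q * x * (L - x) / (2.0 * k)

max_err = np.max(np.abs(T_num - T_ref))
print(f"max discrepancy vs. analytical solution: {max_err:.3e} deg C")
assert max_err < 1e-6, "accuracy regression - flag for corrective action"
```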

A welcome trend in 2012 has been some companies providing customers more speed without charging for it. A common practice in the co-design software industry is to charge separate token costs for each CPU core applied to a solution: if you have a computer with eight cores, you need a primary license plus seven extra tokens to take full advantage of those cores. This year, a few companies began making their tools available with multicore solution engines at no charge. Many other software vendors should follow this direction, since it makes sense in a multicore desktop environment.
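To put rough numbers on the per-core token math, here is an illustrative calculation that weighs hypothetical license and token prices against an Amdahl's-law speedup estimate; none of the prices or the parallel fraction come from an actual vendor.

```python
# Illustrative only: compare per-core token licensing cost against the
# parallel speedup actually delivered. Prices and the parallel fraction
# are hypothetical assumptions, not vendor figures.

def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    """Ideal Amdahl's-law speedup for a given core count."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

PRIMARY_LICENSE = 20_000   # hypothetical annual cost, primary seat
TOKEN_PER_CORE = 2_500     # hypothetical annual cost per extra core token
PARALLEL_FRACTION = 0.90   # assumed parallelizable share of the solver

print(f"{'cores':>5} {'cost':>10} {'speedup':>8} {'cost/speedup':>13}")
for cores in (1, 2, 4, 8, 16):
    cost = PRIMARY_LICENSE + TOKEN_PER_CORE * (cores - 1)
    s = amdahl_speedup(cores, PARALLEL_FRACTION)
    print(f"{cores:>5} {cost:>10,} {s:>8.2f} {cost / s:>13,.0f}")
```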

One way for a software vendor to build a co-design suite is to acquire companies with complementary software tools and then integrate them. For example, a major layout and design tool vendor purchased a strong contender in the electrical analysis space this year, giving it a platform to better implement an electrical analysis suite. The vendor has been building up a 3D finite element method (FEM) electromagnetic solver to make it as strong as any in the industry and is working to integrate both signal analysis and power integrity into a single simultaneous flow. That’s progress!

Sometimes the integration of the acquired tools lags, with separate user interfaces retained for the stable of codes. The best-in-class products meld the offerings into a seamless, streamlined interface that simplifies the learning curve, enables data interchange from one discipline to another, and can perform multi-physics and multi-domain optimization. Great synergies can come from using the best meshing capability of one tool to support another’s strong solution code. While making these improvements, it’s important that the tools don’t lose capabilities such as scripting along the way.

Other vendors have been working to improve efficiency. A vendor supplying schematic capture, circuit and system simulators, and a full 3D electromagnetic solver introduced an upgraded database that improves efficiency, along with a solver that speeds solution times. The faster the solutions, the more iterations can be run, resulting in a better-optimized design. Many vendors are working to improve solver matrix manipulation, allowing more complex structures to be solved in shorter times via parallelization. The best vendors are partnering with leading universities to leverage the latest matrix manipulation and memory management techniques and turn these efficiencies into realities.
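As a rough picture of why matrix handling dominates solution time, the sketch below assembles a sparse system of the kind an FEM discretization produces and hands it to SciPy's sparse direct solver; the problem size and structure are arbitrary stand-ins for a real package or board model.

```python
# Rough illustration of sparse-matrix handling: assemble a 2-D Laplacian
# (a stand-in for an FEM system matrix) and solve it with a sparse direct
# solver. Problem size is arbitrary; production solvers add parallelism
# and smarter orderings on top of this basic step.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200                                   # grid points per side
I = sp.identity(n, format="csr")
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
A = sp.kron(I, T) + sp.kron(T, I)         # 2-D Laplacian, n*n unknowns
b = np.ones(A.shape[0])

x = spla.spsolve(A.tocsc(), b)            # sparse LU factorization under the hood
print(f"unknowns: {A.shape[0]:,}, nonzeros: {A.nnz:,}, "
      f"residual: {np.linalg.norm(A @ x - b):.2e}")
```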

Into The Cloud

Cloud computing provides new avenues to leverage supercomputer performance from desktop workstations. Using cloud computing, the computational heavy lifting such as massive matrix manipulation is uploaded onto shared compute resources rented from cloud computer suppliers (see the figure). A few companies have set up shop offering to solve models using multiple vendors’ tools, including molecular dynamics, computational fluid dynamics, and thermomechanical analysis, on their cloud compute resources. Users interact through a remote desktop interface to pre-process the model and can either download the results to their local machine or post-process over the Internet.

Cloud computing allows a simple laptop or desktop interface with a server farm that leverages supercomputing computational resources, sharing this capital-intensive infrastructure across multiple users and reducing costs.
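From the user's side, such a remote-solve workflow might look something like the sketch below, which assumes a purely hypothetical REST job service; the URL, endpoints, JSON fields, and token are invented for illustration and do not correspond to any provider's actual API.

```python
# Hypothetical cloud-solve workflow sketch: upload a model, poll for
# completion, then download results. The service URL, endpoints, and JSON
# fields are illustrative assumptions, not any vendor's actual API.
import time
import requests

BASE = "https://cloud-solver.example.com/api/v1"    # hypothetical service
HEADERS = {"Authorization": "Bearer <your-token>"}  # placeholder credential

def submit_and_fetch(model_path: str, result_path: str) -> None:
    # 1. Upload the pre-processed model from the local workstation
    with open(model_path, "rb") as f:
        job = requests.post(f"{BASE}/jobs", headers=HEADERS,
                            files={"model": f}).json()

    # 2. Poll until the shared compute resources finish the solve
    while True:
        status = requests.get(f"{BASE}/jobs/{job['id']}",
                              headers=HEADERS).json()
        if status["state"] in ("done", "failed"):
            break
        time.sleep(30)

    # 3. Download results for local post-processing (or post-process remotely)
    if status["state"] == "done":
        data = requests.get(f"{BASE}/jobs/{job['id']}/results", headers=HEADERS)
        with open(result_path, "wb") as out:
            out.write(data.content)

if __name__ == "__main__":
    submit_and_fetch("package_thermal.model", "results.zip")
```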

When models aren’t automatically archived with their geometries, material properties, and results into a corporate database, data is lost, modeling work must be repeated, and engineers within the enterprise end up expressing their own “individuality” in model creation, almost guaranteeing subtle and not-so-subtle correlation issues. Several vendors are building in database tools that allow for easy model archiving, searching, and retrieval.
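A bare-bones sketch of such an archive, here using SQLite, is shown below; the schema and field names are assumptions chosen to mirror the geometry, material-property, and results categories mentioned above.

```python
# Bare-bones model archive sketch using SQLite. The schema is an
# illustrative assumption: store geometry, material properties, and a
# results summary per model so runs can be searched and retrieved later.
import json
import sqlite3

conn = sqlite3.connect("model_archive.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS models (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        created TEXT DEFAULT CURRENT_TIMESTAMP,
        geometry TEXT,      -- JSON: die size, package outline, etc.
        materials TEXT,     -- JSON: conductivities, CTEs, moduli
        results TEXT        -- JSON: theta_ja, max stress, etc.
    )
""")

def archive(name, geometry, materials, results):
    conn.execute(
        "INSERT INTO models (name, geometry, materials, results) VALUES (?, ?, ?, ?)",
        (name, json.dumps(geometry), json.dumps(materials), json.dumps(results)),
    )
    conn.commit()

def search(name_like):
    rows = conn.execute(
        "SELECT id, name, created, results FROM models WHERE name LIKE ?",
        (f"%{name_like}%",),
    )
    return [(r[0], r[1], r[2], json.loads(r[3])) for r in rows]

# Example: archive one run and retrieve it again
archive("QFN64_thermal_rev2",
        {"die_mm": [5.0, 5.0], "package_mm": [9.0, 9.0]},
        {"die_attach_k_W_mK": 2.0, "mold_k_W_mK": 0.9},
        {"theta_ja_C_per_W": 24.3})
print(search("QFN64"))
```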

One major electronics thermal simulation tool has developed an add-on that stores both the original model, with all its boundary conditions, and an output summary. These models are tied to an automatic mesh generator that greatly reduces model generation times, leveraging the repetitive and symmetrical characteristics of many IC package geometries to automatically generate die, die pads, leads, solder balls, thermal vias, and other features needed for thermal analysis. Many of the input geometries can be parameterized and run automatically.

For example, a comparison of package thermal performance versus chip size can be automatically generated by specifying the minimum and maximum die dimensions and the number of steps to take between. All industry standard JEDEC parameters can be automatically calculated or individually selected. The solutions are run in the “cloud.” Each company’s specific database is searchable and archives the input parameters for years to come. Data retrieval to answer customer questions and internal queries is quick and less error prone.
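That kind of sweep reduces to a simple parametric loop. The sketch below generates die sizes between a minimum and maximum in a fixed number of steps and calls a placeholder solve routine; run_thermal_model() and its trend line are purely illustrative, not the tool's actual engine.

```python
# Parametric die-size sweep sketch: step the die dimension between a
# minimum and maximum and record a thermal metric for each step.
# run_thermal_model() is a placeholder for the real (cloud-based) solver
# call; the trend it returns is purely illustrative.
import numpy as np

def run_thermal_model(die_mm: float) -> float:
    """Placeholder: return theta_ja (C/W) for a square die of side die_mm."""
    # Illustrative trend only: a larger die spreads heat better, so theta_ja drops.
    return 40.0 / (1.0 + 0.3 * die_mm)

die_min_mm, die_max_mm, steps = 2.0, 8.0, 7
for die_mm in np.linspace(die_min_mm, die_max_mm, steps):
    theta_ja = run_thermal_model(die_mm)
    print(f"die {die_mm:4.1f} mm x {die_mm:4.1f} mm -> theta_ja = {theta_ja:5.1f} C/W")
```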

On The Board

Tool software engineers will continue to stay busy in 2013 based on the many gaps that need to be filled. For IC package thermomechanical analysis, meshing time must be reduced. Meshing 3D geometries extruded from 2D design databases of the die, substrate, and printed-circuit board (PCB) layouts takes too long. For example, building a mesh for a model to evaluate the impact of a PCB feature such as a through via on the dielectric layers of the die can be a nightmare!

Automated meshing algorithms need to produce better meshes with far less user intervention for all co-design domains. Built-in intelligence should drive codes to both mesh the structures properly for the specific analysis type and prompt the user for the data required to get the optimal model results. Thermal modeling tools should be able to solve a problem from the transistor level through the system level, automatically forming the right hierarchy of sub-modeling, and then be able to transfer those outputs for use by the thermomechanical and electrical tools.
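At its crudest, that hierarchy amounts to a chain of thermal resistances handed up from one level to the next. The roll-up below uses made-up resistance values purely to show the shape of such a calculation; a real flow would pull each value from its own sub-model.

```python
# Crude system-level roll-up: stack thermal resistances from the die up to
# ambient and estimate junction temperature. All values are made up for
# illustration; a real flow would pull each resistance from its own
# sub-model (die, package, board, system).
resistances_C_per_W = {
    "die-to-case (package sub-model)": 3.0,
    "case-to-board (attach sub-model)": 8.0,
    "board-to-ambient (system sub-model)": 15.0,
}
power_W = 2.5
t_ambient_C = 45.0

t_junction_C = t_ambient_C + power_W * sum(resistances_C_per_W.values())
print(f"estimated junction temperature: {t_junction_C:.1f} C")
for name, r in resistances_C_per_W.items():
    print(f"  {name}: {r:.1f} C/W -> dT = {power_W * r:.1f} C")
```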

The electrical tools need to become more thermally aware, especially for critical temperature-sensitive circuit analyses. They also need to become more thermomechanically aware to incorporate stress-induced parametric shifts into their sensitivity analyses. These capabilities need to tie together seamlessly with design input databases, materials databases, and archives of results and provide a corporate-wide reporting and retrieval system.
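As one simple example of what "thermally aware" means at the circuit level, the snippet below shifts an interconnect resistance with junction temperature inside a small sweep; the nominal resistance, temperature coefficient, and current are generic assumptions.

```python
# Simple illustration of temperature-aware electrical analysis: sweep the
# junction temperature and shift an interconnect resistance with a linear
# temperature coefficient. Values are generic assumptions for illustration.
R_25C = 0.050          # nominal trace resistance at 25 C, ohms
TCR = 0.0039           # per-degree-C temperature coefficient (copper-like)
I_LOAD = 2.0           # load current, A

for t_junction in (25.0, 55.0, 85.0, 105.0, 125.0):
    r = R_25C * (1.0 + TCR * (t_junction - 25.0))
    v_drop = I_LOAD * r
    print(f"Tj = {t_junction:5.1f} C: R = {r * 1e3:5.1f} mohm, "
          f"IR drop = {v_drop * 1e3:5.1f} mV")
```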

This is the level of functionality that one dreams about like a kid in a toy store. Will we see this in 2013? Let’s hope.

About the Author

Darvin Edwards

Darvin Edwards, TI Fellow, manages the SC Packaging modeling team at Texas Instruments. His team is responsible for electrical, thermal, and thermomechanical analysis of new products and package developments. He received his BS in physics from Arizona State University and holds 20 patents. He has also authored or co-authored over 45 papers, articles, and book chapters and has lectured on thermal challenges, modeling, reliability, electrostatic discharge (ESD), and 3D packaging.
