Focus on the Customer and Discover Things That Excite and Surprise You


Molex LLC is a company dedicated to providing solutions for every electrical, mechanical, and optical interconnect device our clients need. We help our customers evaluate where they are and where they need to be. Then, we give them the building blocks and the pieces they need to connect those two ideas together. We provide the components and the solutions they need to develop a finished working product that is everything they originally envisioned and, in many cases, even better than they thought they could achieve.

There are a lot of companies out there that do what we do, but at Molex we believe the quality that separates us from those competitors is our tireless focus on the customer's needs. It's not about us; it's about them. Our customers have their own customers they've dedicated themselves to serving, and we want to empower them to do precisely that in the fastest, most efficient, and most cost-effective ways we can.

What separates great companies from good companies is a tireless focus on the customer's needs.

It was this focus on the customer that first led me to investigate certain gaps in industry specifications that were ultimately causing a lot more harm than good.

It All Starts with a Simple Question

My official title at Molex is Signal Integrity Engineering Manager, which means I do a lot of work with the modeling, design, and validation of signal integrity and power integrity (SIPI) for certain products. One of the major gaps we identified in industry specifications had to do with the differential insertion loss limits for certain types of cables we were working with. We recognized that the insertion loss limits and the channel operating margin (COM) limits didn't always agree with each other; they weren't consistent, which made it difficult for customers to achieve their unique goals.

The problem with this is that we can make cables that should work in a system, which means customers should be able to use them in whatever way they need. But because of arbitrary numbers and limits in the spec that are too tight, we can't say that they're "fully compliant." Despite this, most of our customers would probably agree that these cables work perfectly fine in the system. But a lot of times when you get into the qualification process, customers want to do everything by the letter, and they're not willing to make that judgment call on their own. So, they say, "Here's what's in the spec, and that's what we need to follow."

In this scenario, to meet those requirements we have to go to lower loss, meaning bigger and more expensive cables that we also can't make nearly as long. If the customer needs a specific cable length to implement the architecture they need, they have to go with a more expensive overall solution.

In a way, it's a bit of a cost issue. For us, it's frustrating because there's this group of cables that we should be able to sell to our customers that they can successfully use, but we can't. But we're not necessarily in a position to change the status quo, either. It's also an issue that will only get worse over time as we go from 56 Gbps to 112 Gbps with the next version of the spec. That's part of the reason why it's so important for all of us to keep our eyes on this when we're working on the specifications.

At this point we had identified an issue in the marketplace and knew that we could potentially help address it. Based on that, the next step involved testing to see what, precisely, needed to be done.

You Know the What. But Do You Know the Why?

For our analysis, we used ten Quad Small Form Factor Pluggable Double Density (QSFP-DD) 28 AWG cable assemblies of a common construction, ranging from 0.5 m to 3.0 m in length. The range was important, because normally you would build shorter cables a little differently than longer ones. But we didn't want cable gauge to be a variable in our study, so everything was built with identical 28 AWG Twinax cable construction.

These particular lengths were selected to best align with and bracket a number of practical use cases: low loss with higher reflections and higher noise, high loss meeting the requirements in IEEE 802.3, and high loss exceeding the requirements in IEEE 802.3. The thought was that by manipulating only insertion loss, we would produce a distribution that would best represent real-life variation. All cable assemblies were built on the same manufacturing line with the same bill of materials, so that aside from part-to-part variation, length and, by association, insertion loss were the only independent variables.

Whatever product we were going to use for the testing needed to be something highly relevant to the market that people were interested in. QSFP-DD is an industry standard, yes, but it's also one of the newer industry standards, and it has some of the highest port density available. At 50 Gbps per lane you're looking at about 400 Gbps per port. Everyone is interested in getting these cables as long as they can and for the lowest price point, so based on all that we thought QSFP-DD was the perfect product for our purposes.

Overall, we knew we were trying to control for length to control the loss of the cable. We were working with the shortest lengths we typically sell to customers, all the way up to the longest lengths. Then, we went just a little bit past that maximum length to show what happens in that situation with this particular construction.

Our analysis was mainly performed on Touchstone files measured on a 4-port Vector Network Analyzer (VNA) with a twelve-port extension, which enabled the most accurate measurements possible in s16p-format Touchstone files. Sixteen different measurement configurations were ultimately needed to fully characterize the performance of these cable assemblies: eight captured the thru paths and far-end crosstalk (FEXT), and the remaining eight captured near-end crosstalk (NEXT).
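As a rough illustration of what the thru-path measurements capture, the differential thru response (SDD21) can be derived from single-ended S-parameters with the standard mixed-mode conversion. This is a minimal sketch with hypothetical values, assuming a port map where ports 1 and 3 form the input pair and ports 2 and 4 the output pair; Touchstone port-numbering conventions vary by fixture, so check your own map before using it.

```python
import math

def sdd21(s21, s23, s41, s43):
    """Differential thru response from single-ended S-parameters.

    Assumes ports 1/3 are the input pair and 2/4 the output pair
    (standard mixed-mode conversion for that port map).
    """
    return 0.5 * (s21 - s23 - s41 + s43)

def insertion_loss_db(sdd):
    """Insertion loss in dB (as a positive number) from a complex SDD21."""
    return -20.0 * math.log10(abs(sdd))

# One hypothetical frequency point from a measured .s16p file
s21, s23 = 0.30 - 0.40j, 0.01 + 0.00j
s41, s43 = 0.01 + 0.00j, 0.30 - 0.40j
d = sdd21(s21, s23, s41, s43)
print(round(insertion_loss_db(d), 2))  # prints 6.12
```

In a real workflow the same arithmetic runs over every frequency point of the s16p file; a library such as scikit-rf can do the mixed-mode conversion for you, but the formula above is what it reduces to for the thru pair.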

The team working on this testing included some of my colleagues within our signal integrity department at Molex, along with some of our friends from Keysight Technologies, who brought their expertise as well as their hardware and software tools. Keysight is great because they're always willing to participate in these sorts of studies. The Keysight mentality of "this is something we should participate in because we are interested in it" is always exciting.

We also use a lot of Keysight test equipment in our own lab and we have a great deal of familiarity with their engineering teams, so overall they were a good fit. They're always the first people willing to work collaboratively with us, whether it's something like this or for papers at DesignCon or even if we just have an issue in our lab and need some extra help with something.

No Stone Left Unturned

Keysight's role in this turned out to be critical because they were able to provide a second set of measurements using different equipment, run by an entirely different team of operators. For the record, they used a 32-port M9375A 26.5 GHz PXI-based VNA. They collected two s32p files per cable, providing measurements for each direction; because that didn't include all crosstalk data between sets, they also used their FlexDCA software. Keysight is leading the way to connect design and test equipment and software with their new PathWave software platform.

Keysight is leading the way to connect design and test equipment and software with their new PathWave software platform.

After testing was completed, we went through all of our round robin measurements and dove into the data we had collected. We had the output of the IEEE COM tool and also the output of Keysight's eye diagram simulations from the Advanced Design System (ADS). We pulled everything into a statistical analysis software package called Minitab and then started doing best-fit regressions on the data.

We looked for correlation between the various S-parameters, like insertion loss, return loss, crosstalk, and mode conversion, and the outputs of the COM tool and the simulated eye diagrams. We saw fairly strong correlation between insertion loss and COM. This led to one of our primary findings: as insertion loss increases, channel operating margin decreases.
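The best-fit regression step can be sketched with ordinary least squares. The (insertion loss, COM) pairs below are made up for illustration, not our measured data; the point is only the direction of the fitted slope.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b

# Hypothetical (insertion loss dB, COM dB) pairs for five cable lengths
data = [(8.0, 5.1), (12.0, 4.2), (16.0, 3.4), (20.0, 2.5), (24.0, 1.7)]
a, b = linear_fit([d[0] for d in data], [d[1] for d in data])
print(b < 0)  # negative slope: higher loss, lower margin
```

A statistics package like Minitab adds goodness-of-fit diagnostics on top, but the fitted trend itself is just this calculation.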

This made a great deal of sense to us. We created a chart that showed insertion loss versus COM. The 0.5 m cable, for example, showed marginal performance with respect to the return loss limit in IEEE 802.3. All other parameters had a significant margin to their limits and exhibited well-controlled behavior on a pair-to-pair basis. The 3.0 m cable was selected as a corner case because it failed the insertion loss limit in IEEE 802.3 and had several pairs with crosstalk resonances.

In the end, correlating crosstalk measurements was a good way to verify that the non-measured ports were terminated in loads when making multiple measurements and stitching the combined files together.

Overall, our testing confirmed that there is definitely a gap in the specification methods currently being used by customers. You might say that there is a fundamental flaw in the traditional approach to frequency-domain limits and analysis. The insertion loss limit is significantly more constraining than the COM requirements in channels with sufficiently low noise. The construction of the channels in our data set lent itself to low noise and an improved signal-to-noise ratio. COM comprehends this trade-off, and because of that it suggests the possibility of very desirable implementations like reduced-cost solutions, smaller-wire-diameter cables, and longer cable assemblies.
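To make the trade-off concrete: COM is defined in IEEE 802.3 as a signal-to-noise ratio expressed in dB, 20·log10(A_signal/A_noise). A toy calculation, with purely hypothetical amplitudes, shows how a lossier but quieter channel can still post a healthy margin, which is exactly why a fixed insertion loss limit can reject cables that COM would pass.

```python
import math

def com_db(a_signal, a_noise):
    """Channel operating margin figure of merit, per the IEEE 802.3
    definition COM = 20*log10(A_signal / A_noise)."""
    return 20.0 * math.log10(a_signal / a_noise)

# Hypothetical amplitudes: a longer, lossier cable with a quiet
# low-crosstalk construction versus a shorter, noisier channel.
lossy_quiet = com_db(0.20, 0.04)   # more loss, but very low noise
short_noisy = com_db(0.40, 0.09)   # less loss, more reflections/crosstalk
print(lossy_quiet > short_noisy)   # the lossier channel wins on margin
```

The real COM computation derives both amplitudes from the measured S-parameters through equalization and noise models, so this is only the shape of the argument, not the full algorithm.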

Moving Forward, or: "Who Watches the Watchmen?"

Overall, the aspect of this study that I thought was really interesting—and one of the main things that the Keysight team brought to the table—was being able to dive deep into the workings of COM. From my perspective, it shows that using COM at the very least makes sense, because we were able to replicate its results in a totally different tool. Not only was that extraordinarily helpful, it also shows that, given enough time and a real need, you can reverse engineer and truly understand what the COM tool is doing.

If you have the time and the need, you can reverse engineer and truly understand what the COM tool is doing.

It's an open document and the code is available for anyone to take a look at, but I think some people find it a tad overwhelming to try to tear this thing apart and see exactly how it works. With Keysight, we just proved it's a lot easier than you think.

To build on our learnings, we're already talking about expanding the study to examine a few things we didn't get to the first time around. We'd like to take the measured data of the cables, for example, collect simulated eye diagrams, and go a step further to get some system-level measurements like bit error rates. Right now, we've correlated the S-parameters, insertion loss, and other factors to COM, but we haven't yet correlated COM to system performance.

We know there's a gap between the S-parameters and COM. But is that because the S-parameters are too strict, or is it because COM is too optimistic? I think there's industry-wide trust in COM, but I don't know that it's been publicly validated, at least for this particular aspect of the cables. But now, we're in a position to perform that validation. I couldn't be more excited about working with Keysight and what we are able to do for our customers.