At Mobile World Congress 2025, I had the chance to chat with industry analyst Abe Nejad to discuss the next phase of telecom engineering. We talked openly about the real-world challenges that come with multi-vendor environments and what mobile operators need to prepare for.
At Verizon, ECG's Sherwin Crown joined the OneTalk development team in the role of system integrator. Drawing on ECG's two decades of projects like that one, supporting operators like AT&T, Microsoft, and Liberty, our conversation turned into a practical checklist for integrators of voice platforms, along with Open RAN and 5G integration promoters. And these lessons generalize to other industries where a single tech vendor paves the way, and then other vendors come in to connect with that main player.
We've Seen The Multi-Vendor Challenges Before -- In VoIP Network Design
Open RAN is not the first time telecom has had to figure out multi-vendor interoperability. As I told Abe, we saw this play out 20+ years ago in the VoIP and IMS world. Back then, big vendors like Nortel and Lucent rolled out vertically integrated, single-vendor deployments. Sound familiar?
Radio Access Networks (RANs) are a hot integration area. While much of the work is on 5G cores and radios, the buzz centers on full openness in Open RAN. Some are discouraged that Open RAN hasn’t fully “opened” yet. But honestly, that’s normal at this stage. What matters now is that operators push vendors—Nokia, Ericsson, Parallel Wireless, whoever—to do real interoperability testing. Standards may exist, but implementation options create ambiguity. As we learned with VoIP: if the operator (the Service Provider) doesn’t demand it, true interoperability won’t happen.
Visibility Is Non-Negotiable (You need good logs!)
Diagnostics and visibility matter more in multi-vendor environments than ever before. It’s not enough to rely on logs or status codes. You need full packet captures. You need to see what the software is actually doing. Legacy hardware-focused platforms didn't usually log the internal logic.
And let’s be clear: vendors often hesitate here. There’s a CPU tradeoff—instrumentation takes cycles. But CPU is cheaper every year. If you’re building software without diagnostics because of CPU cost, you’re building with blinders on. Operators need to demand introspection points—tools that let them see inside the decision-making. Logs should let you see the logic happening inside the platform.
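To make rationale-level logging concrete, here is a minimal sketch in Python. All the names (the subsystem, the policy structure, the option strings) are hypothetical illustrations, not any vendor's real API; the point is that the log records why a decision was made, not merely that a message arrived.

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("ru-attach")  # hypothetical subsystem name


def select_transport(ru_capabilities, du_policy):
    """Pick a fronthaul transport option and log the rationale, not just the result."""
    candidates = [t for t in du_policy["allowed"] if t in ru_capabilities]
    log.debug("RU offered %s; DU policy allows %s; candidates=%s",
              sorted(ru_capabilities), du_policy["allowed"], candidates)
    if not candidates:
        log.error("no common transport; rejecting attach (policy=%s)", du_policy)
        return None
    choice = candidates[0]  # policy list is ordered by operator preference
    log.info("selected %s: highest-preference option both sides support", choice)
    return choice


select_transport({"ecpri", "roe"}, {"allowed": ["ecpri", "roe"]})
```

A log line like "selected ecpri: highest-preference option both sides support" is exactly the introspection point an integrator needs at 2 a.m.; a bare status code is not.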
Many Available x86 Servers: Designing Automation Tools for Variety
Operators love general-purpose hardware—Intel, AMD, ARM. But I warned Abe: hardware diversity brings real operational complexity.
What happens when Dell stops selling the server you standardized on? Or when a new feature in a Lenovo BIOS would save you hundreds of hours—but doesn’t exist in 80% of your network? These small variations in management interfaces, NIC boot behavior, or firmware upgrade tools add up fast when you’ve got 10,000 remote sites.
Start planning now for server heterogeneity. Expect it. Test for it. Build your automation around it: assume you'll have a variety of server types and will need to support them all. Diversity will also help you avoid single-vendor cybersecurity risks, probably sooner than you might imagine.
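One way to build automation around heterogeneity, sketched below in Python with hypothetical model names and command strings, is to confine per-vendor quirks to a driver table so the site turn-up workflow itself never branches on hardware:

```python
from dataclasses import dataclass


@dataclass
class ServerDriver:
    """Per-model quirks live here, not in the site turn-up logic."""
    firmware_update_cmd: str
    pxe_nic_order: list


# Hypothetical entries; in practice this table grows with every hardware refresh.
DRIVERS = {
    "dell-r650": ServerDriver("racadm update ...", ["nic1", "nic2"]),
    "lenovo-sr630": ServerDriver("onecli update ...", ["nic2", "nic1"]),
}


def turn_up_site(model: str):
    """One identical workflow for every site; the driver supplies the differences."""
    driver = DRIVERS.get(model)
    if driver is None:
        raise ValueError(f"no driver for {model}: add one before deploying this hardware")
    return {"boot_nic": driver.pxe_nic_order[0],
            "firmware_step": driver.firmware_update_cmd}
```

The design choice matters: when Dell discontinues your standard server, you add one driver entry instead of auditing every provisioning script for hidden assumptions.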
Lab Testing Is No Longer Optional
With single-vendor solutions, you could rely on their QA lab to catch the bugs. That era is over when you want to build "best of breed" systems that integrate components from many vendors.
Every network operator is now unique. You’re integrating multiple vendors, each responsible only up to their interface boundary. The “seam” is your problem. And those seams—where CU meets DU, or DU meets RU—must be tested, again and again. For voice, the interfaces between application server and SS7 gateway, or between CALEA source and CALEA sink, are your testing points.
The cloud world has shown us what’s coming: weekly patch cycles, CVE-mandated upgrades, system diversity. As I said during the interview, “software culture means you're testing constantly.” If you don’t have a lab, you’re betting your network on someone else’s assumptions.
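A lab regression gate can start as something very small: a scripted checklist of seam tests that every vendor software drop must pass before deployment. A toy Python sketch, with hypothetical check names standing in for real interface tests:

```python
def run_regression(checks):
    """Run every check; collect all failures instead of stopping at the first one."""
    return [name for name, fn in checks if not fn()]


# Hypothetical seam tests: each exercises an interface boundary the operator owns.
# Real checks would drive traffic across the seam; lambdas stand in here.
checks = [
    ("cu_du_attach", lambda: True),
    ("du_ru_bringup", lambda: True),
    ("calea_handoff", lambda: False),  # simulate a regression at a seam
]

failures = run_regression(checks)
print("BLOCK DEPLOY" if failures else "OK TO DEPLOY", failures)
```

Wiring a gate like this into a weekly (or CVE-driven) patch cycle is the difference between testing constantly and betting the network on a vendor's assumptions.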
Deep Troubleshooting Requires Real Training
Operators must own their architectures. Not just accept a vendor’s preferred configuration. Not just assume their network is “certified.” You need to ask: does this option limit our growth? Does it introduce a security risk? You can’t make those calls without training.
That means hands-on labs, protocol analysis, familiarity with standards, and the ability to call out problems in vendor behavior. This isn’t just good hygiene—it’s what makes deployment windows possible. If you can’t troubleshoot in-house, you can’t deploy on time.
In a Multi-Vendor World, You Are the QA Team
Smaller vendors move fast—and often expect the operator to be the QA lab. That’s not just a burden; it’s a chance to influence the roadmap.
As I told Abe, one of the heads of Verizon Labs told me: “Software quality has taken a hit. Now they expect us to test it.” That’s the reality. You will be testing software that’s “ready for testing,” not “ready for deployment.” Know what you're signing up for, and structure your team accordingly.
Standards Are a Good Start—but Only a Start for Network Design
We all love standards—until we hit the optional parts. One spec gives you three ways to transfer DTMF. Each vendor picks a different one. Multiply that across every interface, and integration becomes a maze.
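The remedy that eventually emerged in VoIP was for the operator to pin an interoperability profile: declare one allowed option per interface and flag any vendor that cannot comply. A toy Python sketch, using real DTMF option names (RFC 2833-style RTP events, SIP INFO, in-band audio) but hypothetical vendor capability sets:

```python
# Each vendor implements only a subset of the standard's options.
VENDOR_DTMF = {
    "vendor_a": {"rfc2833"},
    "vendor_b": {"sip_info"},
    "vendor_c": {"rfc2833", "inband"},
}

# Operator-pinned profile: one choice per interface, decided up front.
OPERATOR_PROFILE = {"dtmf": "rfc2833"}


def check_compliance(profile, vendors):
    """Return the vendors that cannot do the operator's pinned option."""
    required = profile["dtmf"]
    return sorted(v for v, opts in vendors.items() if required not in opts)


print(check_compliance(OPERATOR_PROFILE, VENDOR_DTMF))
```

Running the check against this toy data flags vendor_b, turning a late-stage integration surprise into an up-front procurement conversation.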
It took 10–15 years in IMS before industry norms emerged. Open RAN is in its early innings. Until the ecosystem settles, operators must lead the way in narrowing options and enforcing consistency.
The DevOps Mindset Is Coming for You
Hardware culture aims for perfection at launch. Software culture ships fast, patches often, and learns from what breaks.
DevOps in telecom means testing daily, patching weekly, certifying updates continually. Gone are the days of quarterly maintenance windows. If you’re deploying Open RAN, your mindset must shift to “ship and adapt,” not “certify and forget.”
Zero Trust Isn’t a Checkbox—It’s an Urgency
Cybersecurity in both Voice and RAN needs a new level of seriousness. Zero Trust isn’t just an idea—it’s a way of seeing the world: assume malware is everywhere. Every phone. Every test device. Every tool someone plugs into your Ethernet switch.
The solution? As I put it in the interview: “Get your software onto the public internet and test it there.” That’s where you’ll learn what real attackers do. That’s where your defenses will mature. Anything less is a lab fantasy.
The challenge is that new systems aren't targeted early. For example, there's not much cyberthreat activity aimed at Private 5G baseband just yet, but you can count on it coming. Systems only become interesting targets as they proliferate. So design your network assuming it will be attacked as soon as it can be.
This conversation with Abe Nejad was a valuable moment to reflect on the real demands facing Open RAN adopters. The industry is moving fast, but we’ve seen these patterns before. If operators are prepared—with visibility, training, and the right mindset—they can thrive in the multi-vendor future.
But these lessons hold true in any technology area where customers are moving from a single vendor to multi-vendor integration. Look at Siemens' hold over industrial manufacturing technology: there, too, a single vendor is being forced to integrate with many new vendors.
Watch the full interview (above) to hear more from the conversation, or read the transcript below. And if your team is facing these challenges, ECG is ready to help.
Full Transcript
Abe Nejad: For over two decades, ECG has been delivering technical expertise and staffing solutions to leading operators in the ICT industry, from contributing to the development of the Verizon OneTalk service to engineering products for Inmarsat. ECG's impact spans major global operators. Today, we speak with Mark Lindsey. He's a principal at ECG, here to discuss their role in the industry's transformation and the future of telecom engineering. And Mark, welcome. Thanks for being here. Big show here in Barcelona. How's it been for you?
Mark Lindsey: So far, it's been really good. Lots of good meetings, lots of good contacts, great.
Abe Nejad: I have a number of questions to ask you. I want to start with: how can lessons from Voice over IP inform, really, the multi-vendor interoperability challenges that exist in Open RAN?
Mark Lindsey: Yeah, I think the Voice over IP and IMS development from about 25 years ago really serves as an interesting lesson for Open RAN developers. We started with really single-source, single-vendor Open RAN-type deployments, which is what we're seeing now. And we saw the same kind of thing in the VoIP and IMS world early on, where the big players like Nortel and Lucent were rolling out some of that initial deployment using all their own equipment. But I think that's natural. Some folks have been a little startled by that. They said, well, Open RAN is not really achieving the success that we were hoping it would achieve. And I think it's a little premature to say that. It's always natural when you're bringing out a new standard to see that the vendors are going to implement it on their own, in their own way. So that's one big area: don't be too startled. It's pretty normal, and it's part of the evolution process. What we need to see is the network operators pushing back on those sources, whether it's Nokia and Ericsson or Parallel Wireless, to actually do that testing with other vendors. We see a lot of that at the show, where vendors are starting to work with each other, but it really has to be driven by that customer. I think another big area of overlap is the way standards can have options. In Open RAN there's a lot of good work being done, standards are being finalized right now, but there are always options for whether to do it this way or that way, and that can lead to some complexities. You've got to decide what they are. One of those options, for example, in the VoIP and IMS world, was just: how do we transfer touch-tone, or DTMF, digits through? There were half a dozen potential ways that were discussed, because the major vendors had each chosen one or another early on.
So it's important for the network operators, as soon as they can, to hone in on a de facto standard. Even though the standard is going to allow multiple options, it's important for them to choose one that's going to be used more broadly. And then cybersecurity is another big area. The networks that were built, the voice and telecom networks early on, had a kind of natural isolation. Then the newer technology, whether it's based on cloud computing or just IP networking, introduces new cybersecurity challenges that the industry is fairly new to. There's a good deal being discussed about zero-trust networking right now, which is great. It's exactly the right idea, but I don't quite think it's actually being realized yet. And that's what I'm telling my clients: you've got to really understand what zero-trust networking is. It doesn't just mean some IP access control lists. It really means, in practice, exposing your applications to malware and attackers. And that's going to change things. So there are some lessons where maturity is going to be required. I think we're seeing a lot of the same kinds of patterns that we've seen in other multi-vendor telecom concepts, through VoIP and IMS; really, we're seeing those come back out in Open RAN development.
Abe Nejad: Interesting Mark, I want to move on to some of the key benefits of implementing end-to-end visibility and packet capture capabilities that are really useful for troubleshooting in Open RAN.
Mark Lindsey: Yeah, it's important to have the ability to see what's happening in these protocols. I think this is commonly not appreciated until you get into the deep multi-vendor interop and you've got deadlines, and your network operators are going to wonder: why can't we see what's happening inside my software? There's a tradeoff between how much detail these software-based systems give and how much CPU power is required to do it. But a mistake I saw being made by some of the big vendors, for example with some of the equipment sold in the Session Border Controller space, the security space, was that they kind of assumed CPU was going to be about as expensive in 10 years as it was initially. Well, of course, it's not. It's going to be far less expensive a few years from now. CPU is always getting less expensive. And CPU is required to provide debugging and diagnostic data. So when the vendors are just working on being efficient enough to support, say, massive MIMO through software, that takes a lot of care and a lot of efficiency in the code, but it tends to drive the developers to provide less diagnostic data. That's something the network operators need to push back against and say: hey, we need an option to see what's happening in your decision making internally. There's a trend in this industry, and it also happened in VoIP and IMS, to think that the official messaging log, whether Diameter or ECP, is all that's necessary. But in reality, you're going to need packet capture and you're going to need logging capability. It is helpful that in this industry, like in VoIP and IMS, there's a de facto human language, English, because the vendors need to be able to provide diagnostic data in a language that everyone's going to be able to understand.
And then there's also the ability to get visibility through packet capture, which is going to be necessary in every case. However, it's not something that comes naturally if you're migrating from a hardware-based system to a software-based system.
Abe Nejad: Let's talk more about log sharing. How can vendors improve log sharing and diagnostic data to ensure faster root cause analysis in a multi-vendor RAN environment?
Mark Lindsey: Well, I think it's about being able to generate logs about the decisions that are being made internally: how the DU and the CU are interacting, how the DU is interacting with the RU as it boots up. Those elements need to make diagnostic data available to the network operator, to help them understand why the system is deciding to do what it's doing and the rationale for the decision making. Not simply that it received a message or that it's transmitting a message, but what was the logic behind that decision? And that doesn't come naturally, especially from the hardware sort of mindset. In the hardware culture, you don't necessarily explain the machinations when you're making a decision through electronics; you can't, it's not really part of it. But in the software culture, software developers can do that, and conventionally they will do that. Those things have become deeply ingrained in the entire integration and interoperability process, so now it's become a standard expectation. In the VoIP and IMS world, for example, initially, back in 2003, it was not. I think we're seeing the same sort of pattern repeated with Open RAN, where it's not necessarily the expectation that you're getting all the internal rationale. Still, it really needs to become more of the norm. Being able to generate logs so that, if I give you some new software and you deploy it in your network, I can reasonably ask you for some logs: show me the logs from when we connected that new radio, so I can troubleshoot in that way. That needs to become more of a norm.
Abe Nejad: So what are the biggest challenges, Mark, that operators face when managing these x86-based server infrastructures from multiple vendors?
Mark Lindsey: Yeah, it's really interesting. There's a lot of excitement about being able to use standard Intel or AMD or maybe ARM architectures, and that's great. I think there's a lot of efficiency that comes through those things. However, if you're a big Canadian vendor rolling out a large Open RAN deployment, what you're going to see is likely a lot of uniformity, which is great. Every engineer wants uniformity: simple code that manages the configuration, the ability to replace servers at cell sites. Those kinds of things are great, except at some point they're going to stop making the server that you were using and start making a new one. The new one's going to have a new feature. And you love this new feature, but it's not available on 80% of your network, so you're going to have to decide: how do I add support for that new functionality that really will save us a ton of time? It'll help us do replacements, it'll help us do maintenance so much quicker. And so you start to have variety there. On the early end, you're not necessarily expecting that. So the sooner the network operators can start to plan around heterogeneity, where they have multiple server vendors, the better. There's some interest in server vendor diversity today, but it's not really entrenched yet. Usually there's a lot of standardization: whether you're getting your servers from Dell or Lenovo or from HP Enterprise, those are all great options, but they're different. There are going to be little differences in how your cell site turn-up process works, how the operations work, how the back-end process works. So understand that you are going to have differences, even between network cards. The way the network cards boot up and attach to the network is going to be different, and that changes things.
When you're trying to run 10,000 remote sites and never do a truck roll to visit them, that really changes things. So the sooner they start testing that variety and building it into their provisioning and automation logic for activating sites, knowing they're going to have some variety in those servers, the better. The sooner a network operator is prepared for server diversity, the better.
Abe Nejad: So Mark, why is maintaining independent lab environments really critical for Open RAN operators, and how can they overcome these resource constraints?
Mark Lindsey: Yeah, it's really interesting. When you can depend on your single-source vendor to do a lot of your interop and integration testing for you, you basically are able to get it all from one source. For example, rewind 20 years: you could get it all from Lucent. They would have pre-tested everything for you, and that was great. They did a lot of great work for you. There are vendors who will do that now: they pre-test, everything's pre-integrated, and that reduces the network operators' need to do a lot of testing themselves. They largely just depend on what the vendor says. If something doesn't work, it's basically up to the vendor to figure out why, because it was something they had effectively certified. But with a multi-vendor environment, we have to remember, it's a completely different story. Now there's a demarc point: there's a certain software stack, certain network interfaces that they're implementing to, and each vendor is really only responsible up to that edge. And so it means you've got to be ready to actually test and be responsible for the edge, the little place where the two interfaces touch. That's actually your responsibility as the network operator. You can't depend on Rakuten, Parallel Wireless, or Samsung to cover the edges. They're only responsible for their particular interfaces, and maybe where their components touch each other, but generally speaking the network operator has to take responsibility for that, and so being able to test those things in the lab is really important. Being able to say: oh, we've got a new software update, which is going to happen far more frequently than someone from the hardware culture is going to expect. Sometimes we've got a new software update that's really essential, and we have to test this thing.
And we have to test it, and we have to get this deployed in the next seven days, because it's a big cybersecurity vulnerability, and the federal government in the US has announced that everyone needs to run the software update in their cell sites. We've got to do this pretty quick. Well, we need to know that it's not going to create regressions, it's not going to break things in our network, and so we're going to have to have a lab to do that, because the vendors are not in a position to test your particular decisions. So every network operator, when you have multi-vendor or software-based systems, is unique. They're kind of a unicorn. They're special and distinct, and they have to take responsibility for that testing themselves, but it does take a substantial amount of effort. That's one of the things ECG does: some of that lab testing, to help with these multi-vendor interop projects, where we put it in the lab and then we're able to run testing on a schedule. I've heard stories from the cloud space, for example, where a cloud operator will have thousands of servers, and they'll get an update from Broadcom. It's required, it's a good update, and now they've got to go update thousands of servers. They were expecting to do updates every quarter; as it turns out, now they're doing updates about every week, and that changes things. Being ready to do that kind of testing is really necessary in software culture.
Abe Nejad: And how should operators really balance vendor collaboration with a culture of deep troubleshooting and internal training as well?
Mark Lindsey: Yeah, that lab testing really is going to push operators toward needing to understand how these functions are working and how they think they should be working. Therefore, exercising some network architectural decision-making is crucial, as it involves thinking through how this protocol was intended to be used. If we do it the way that you're suggesting, vendor, that may be a good way, but does that open us up to some cybersecurity issue that we need to think through? Or, if we take that option, is that going to limit our ability to grow the network the way we want to later? So being able to exercise some architectural ownership will probably be new to some of the multi-vendor network operators who are putting together software from multiple places. They've not been in that position before, so they're going to have to be pretty much in the weeds to understand how it works. That's going to mean training: get all the training you can from the vendors, plus training on the standards and how those standards are being used, and then a lot of hands-on experience. It's really important for the engineers, the technicians, the product managers to have the resources to look at how their system is functioning, not just take the different vendors' word for it, but to look at what it's doing, in order to move projects along and move things to deployment. One of the things that's missed sometimes is how just short delays can lead to missing a deadline or a time window. And if you're one of these vendors who is offering a solution in the Open RAN space, then you want to be able to accelerate the testing and the proof of concept and the initial deployments.
You want to be able to accelerate that, so ensuring that your customer's staff knows how to look at the details of the protocols and the internal operations is going to be critical to actually closing the deal, to actually getting the sale done. These things are not just arcane details for down the road. They're for the upfront implementation and the interop testing. We've seen a lot of good VoIP software and IMS software, for example, not get selected simply because the vendors were not very efficient at doing that deep interop troubleshooting and then exposing that information.
Abe Nejad: So Mark, what role should network operators play in the quality assurance process when working with smaller and really more agile vendors?
Mark Lindsey: Yeah, it's really interesting, because, as I mentioned, the quality is somewhat different, since the vendor interoperability responsibilities are different. If you're getting everything from one vendor, and they're pre-integrating and certifying it as sort of a package, then in a sense they've done a lot of that QA testing for you. I had an interesting conversation with one of the heads of Verizon Labs about 10 years ago. We were talking about what had changed in the VoIP era, because he had worked with their VoIP and IMS network when it was basically a single vendor, a single source, and then he saw a transition over to multiple-vendor VoIP. And he said: well, software quality has really taken a hit, because they expect us to do all the testing, because they can't test on our equipment. In his case, Verizon had purchased components from a dozen different vendors and really needed them all to work well. The vendors couldn't possibly test it all together; he had to do all that testing. So really what he was getting was software that was ready for testing. When he received it, it was ready for testing, and then his team launched their own testing. So the QA responsibilities change substantially when you go into a multi-vendor, Open RAN type of environment. You, as the network operator, are going to need to test and self-certify a lot of those connections, because you can't expect each of the individual vendors to do that testing for you; they don't have what you have in your lab. Every customer, every network operator is unique in a new way, and that means there's a responsibility for quality assurance that really isn't there under a primarily hardware-based culture.
Abe Nejad: So why don't Open RAN standards completely eliminate interoperability challenges? And how can operators really navigate these discrepancies?
Mark Lindsey: Yeah, it's interesting. I've studied standards from different areas. For example, I did some research work in video standards, I worked with Lucent on network monitoring standards back in the late '90s and early 2000s, and then on the VoIP standards. One of the interesting things they have in common is options. They'll have an option, and I referenced this a minute ago when I was talking about DTMF transfer. The different vendors will have their favorite way of delivering or handling some sort of data. And when you have multiple options, it creates more complexity. So the challenge is: how do I deal with all those options? Each vendor is only going to be aware of a few of the options, because their customers have never asked them to be aware of all 10 potential ways of doing this, or all three ways of doing this. Only two of them, maybe, so they really only have partial awareness. Then, when that third or fourth way of doing a particular function, that additional capability that's only used in one particular scenario, actually shows up, the software developers haven't built it in. They haven't built in the capability to prepare for it. So options in the standards are actually one of the banes of the implementer's and the system integrator's existence, because we have to figure out which options were selected by this vendor and which options were selected by that one. And it took a good 10 to 15 years in the multi-vendor IMS space for these options to start to crystallize and for the industry to figure out which options were really required. So now, if you're an IMS vendor and you're working with a new cell handset or software, or a new type of hardware device, it's become a lot clearer what the important options are. But if you rewind 15 years, it just wasn't as clear.
I think that's the point we're at with Open RAN right now: it's not quite clear which of the capabilities in the standards are truly optional, and which are technically optional according to the standard but everyone will implement because everyone will require them. So you have to know about those options. There's also a challenge in software-based protocols of a kind of perfectibility: they want to keep the options open, to make the standard a little bit better, to add some new enhancement, or to let a new vendor come along with a new idea. That sounds good, but from an implementer's perspective, it's really challenging. So those options are actually one of the things that make the implementation of a multi-vendor standard like the Open RAN standards a little harder than it's probably expected to be in some cases.
Abe Nejad: So, Mark, what mindset shifts are required for operators transitioning from a traditional hardware-based RAN to a more software-driven, DevOps-oriented model?
Mark Lindsey: Yeah, that's a great question. The hardware culture and the software culture could hardly be more different. The hardware culture is about shipping perfection: we've designed something, we've tested it, we've checked its RF emissions, we understand it well. The hardware culture is really good at making high-precision electronic RF components. They can do that really well. Software culture is about: ship early. When you ship early, we'll test it, and then we'll figure out what's really working. Sure, we're going to do patching and updates. So patch early, patch often; get it out there in the field and let's start to get some data on this. We call it data-driven development. Everyone loves data-driven, except data means that you just learned a lot of things that didn't work. So there are some things you've got to prepare for. It connects back to the QA conversation: you need to decide whether you're ready to accept something that is partially broken, and then ready to conduct testing and report back all the bugs and issues that are discovered and need to be remediated. That's the software culture: you're constantly doing remediation. With hardware, you're expecting it to be largely in pretty good shape, because it's got to be tested, but it's going to take a lot longer to develop and then innovate to the next generation, whereas with software you're getting new generations, in effect, every few weeks in a lot of cases. So that mindset shift, to patching early and patching often, is something that's got to be understood. For example, if you're planning cell site deployments and every new software update is going to require several hours of downtime, that's going to be a pretty big hit.
If those software updates are coming from your vendor on a regular basis, it's going to feel like every few minutes, and you can't afford to take those updates nearly as readily. The cell site vendors and the network operators need to be prepared to do early and frequent patching and updates, and to certify and test things in their labs on basically a continuous-integration basis. They need to be able to bring these updates in, test them, and get them deployed readily. That's a big difference from the idea that we're going to do one big upgrade, that upgrade happens every quarter, and it's on our schedule. The network operators are not always going to be able to decide the schedule on which they take updates.
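The continuous-integration certification Mark describes can be sketched in a few lines. Everything here is hypothetical (the `VendorBuild` and `CertificationGate` names, the example checks); the point is the pattern: every incoming vendor build runs the same automated lab regression gate before it is eligible for deployment, rather than waiting for a quarterly upgrade window.

```python
# Minimal sketch of a continuous-certification gate for vendor software
# updates. All names are illustrative, not a real Open RAN tool. A real lab
# would run interoperability, load, and failover tests against staged gear.
from dataclasses import dataclass, field


@dataclass
class VendorBuild:
    vendor: str
    version: str


@dataclass
class CertificationGate:
    # Each check is a (name, test_function) pair run against every build.
    checks: list = field(default_factory=list)
    certified: list = field(default_factory=list)

    def submit(self, build: VendorBuild) -> bool:
        """Run every regression check; promote the build only if all pass."""
        results = {name: test(build) for name, test in self.checks}
        if all(results.values()):
            self.certified.append(build)
            return True
        failed = [name for name, ok in results.items() if not ok]
        print(f"{build.vendor} {build.version} rejected: failed {failed}")
        return False


# Stand-ins for real attach/handover regression suites.
def attach_regression(build):
    return True

def handover_regression(build):
    return build.version != "2.1.0"  # pretend 2.1.0 is a known-bad build

gate = CertificationGate(checks=[
    ("attach", attach_regression),
    ("handover", handover_regression),
])
gate.submit(VendorBuild("RadioCo", "2.1.0"))  # rejected: handover fails
gate.submit(VendorBuild("RadioCo", "2.1.1"))  # certified
```

The design choice worth noting is that the gate, not the vendor's release calendar, decides what reaches production: builds arrive on the vendor's schedule, but only certified ones move forward.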
Abe Nejad: Yeah. So Mark, how can Open RAN operators enhance their cybersecurity strategies to mitigate the risks associated with multi-vendor interoperability?
Mark Lindsey: Cybersecurity is something the Open RAN operators are going to need to take seriously in a new way, like I've mentioned. The talk of zero-trust networking is a good idea; it's the right way to do it, and I'm glad those things are being baked into the standards. But I don't think that's exactly the way things are being tested right now. The zero-trust networking concept effectively assumes you've got malware everywhere: every device, whether it's your network operator's PC or a device you're bringing onto the site and plugging into the local fronthaul Ethernet switch, may have malware on it, so you're bringing malware into the network at all times. And that actually matches reality. Smartphones, for example, are now one of the primary ways of delivering malware into enterprise networks, because they're allowed to attach to internal Wi-Fi networks. If you think about that malware-everywhere problem, you don't really have an isolated section of the network. Zero-trust networking means that if a testing device is plugged into your fronthaul Ethernet switch, that device may have malware too, and I don't think folks are directly testing that as much as they should be. The concept that every network and every component should be ready to defend itself really needs to take hold in the Open RAN community. It's partially a software quality issue.
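The core of the zero-trust idea Mark outlines can be shown in a tiny sketch: an authorization check that deliberately ignores which network a request arrived from, so a device on the "internal" fronthaul switch gets no more trust than one on the public Internet. The names and the shared-key scheme are illustrative only; real deployments would use per-device certificates (e.g., mutual TLS), not a demo HMAC key.

```python
# Sketch of a default-deny, per-request check in the zero-trust spirit.
# The shared key stands in for per-device credentials; this is not a
# production authentication scheme.
import hashlib
import hmac

SHARED_KEY = b"demo-key-not-for-production"


def sign(device_id: str, payload: bytes) -> str:
    """Produce the credential a legitimate device would attach to a request."""
    return hmac.new(SHARED_KEY, device_id.encode() + payload,
                    hashlib.sha256).hexdigest()


def authorize(device_id: str, payload: bytes,
              signature: str, source_net: str) -> bool:
    # Zero trust: source_net is deliberately ignored. Being plugged into
    # the fronthaul LAN earns a device nothing; only a valid credential does.
    expected = sign(device_id, payload)
    return hmac.compare_digest(expected, signature)


msg = b"config-update"
ok = authorize("ru-17", msg, sign("ru-17", msg), source_net="fronthaul-lan")
forged = authorize("ru-17", msg, "forged-signature", source_net="fronthaul-lan")
```

The point of the unused `source_net` parameter is the design statement: the decision function accepts it and then refuses to base any trust on it.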
So my general piece of advice is: the sooner the software developers can get their software onto the Internet, and I mean the public Internet, with all the malware and all the attacks out there, and test it there, the sooner they're going to be able to mature their software in a way that's really impossible in an isolated lab with a private Ethernet switch and private IP addresses. Zero trust is the right way to think about it, but they really need to go all the way in implementing and testing with as much exposure as possible. And yes, that will eventually mean real, production cell sites on the public Internet. What I mean is putting the software in a situation where it can be exposed, where you can figure out what's happening to it and how attackers are attempting to attack it. Without that kind of testing, you're not going to get the software maturity you need to truly defend yourself. We saw this a lot in the VoIP world, and we're seeing it in the SCADA and industrial world: the [American] federal government's CISA is publishing industrial vulnerabilities pretty much every week, largely because these are Internet Protocol-based devices that haven't been fully matured, because they were not designed assuming that malware is everywhere and attackers launch attacks from everywhere. I have customers in the telecom world where malware and attacks have been launched from inside their trusted networks. That mindset really needs to be adopted by Open RAN developers and operators.
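Exposure testing only pays off if you measure what attackers actually try. As a hedged illustration, here is a toy parser that tallies failed authentication attempts per source address; the log format (`AUTH_FAIL src=...`) is invented for the example, not taken from any real Open RAN component.

```python
# Sketch: summarize attacker behavior observed during exposure testing.
# The log line format is hypothetical; adapt the regex to your software's
# actual audit log.
import re
from collections import Counter

LOG_PATTERN = re.compile(r"AUTH_FAIL src=(?P<src>[\d.]+)")


def tally_attackers(log_lines):
    """Count failed authentication attempts per source IP address."""
    counts = Counter()
    for line in log_lines:
        match = LOG_PATTERN.search(line)
        if match:
            counts[match.group("src")] += 1
    return counts


logs = [
    "2025-03-01T10:00Z AUTH_FAIL src=203.0.113.5 user=admin",
    "2025-03-01T10:01Z AUTH_FAIL src=203.0.113.5 user=root",
    "2025-03-01T10:02Z AUTH_OK   src=198.51.100.7 user=tech1",
]
top_sources = tally_attackers(logs)  # Counter({'203.0.113.5': 2})
```

Even this simple tally turns "we exposed it and survived" into data a developer can act on, which is the maturing effect Mark describes.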