I really don’t know clouds at all. – Joni Mitchell
The semiconductor industry is finally on the cusp of joining the cloud revolution. The cloud has promised greatly expanded resources for years, but adoption has been slow due to lingering concerns. Surprisingly, the biggest factor behind the reluctance to move from on-premise EDA servers to cloud-based servers is the rise of third-party IP. In the old days, if you developed 100 percent of your own IP, put that IP on a public cloud, and it somehow leaked out, well, shame on you. That would certainly be bad for business, and it might hurt your reputation a bit. But these days, with so much third-party IP embedded into chips, a leak of that third-party IP is a lawsuit-fest in the making.
Consequently, semiconductor companies now have even more incentive to protect IP with advanced security. Counterintuitively, cloud-based security is far, far better than on-premise security. Why? Because keeping customers’ data secure is the central mission of cloud service providers, so they’ve developed a rich set of security tools to protect the data entrusted to them by their clients. In many ways, you can maintain much better security in the cloud than you can with on-premise tools.
Image credit: Markus Spiske temporausch.com from Pexels
Amazon Web Services: Exemplifying the benefits of cloud computing
Take Amazon Web Services (AWS) as an example. (Note: AWS is not the only vendor in the cloud space, but it’s one I’m very familiar with.)
AWS has developed the concept of security groups – firewalls that you can throw up around any network interface to allow only specific traffic into that secured network. You can do that for just one server or for a fleet of servers, in seconds. Most on-premise server networks simply lack the security tooling to work that quickly, that easily, or with such fine-grained control.
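To make that concrete, here’s a sketch of what setting up a security group looks like with the AWS CLI. The VPC ID, group ID, and address range below are placeholders, and the commands assume a CLI already configured with valid credentials:

```shell
# Create a security group for EDA compute nodes (vpc-0abc1234 is a placeholder).
aws ec2 create-security-group \
    --group-name eda-workers \
    --description "EDA compute nodes" \
    --vpc-id vpc-0abc1234

# Allow only SSH, and only from one corporate address range
# (sg-0def5678 and 203.0.113.0/24 are placeholders).
aws ec2 authorize-security-group-ingress \
    --group-id sg-0def5678 \
    --protocol tcp \
    --port 22 \
    --cidr 203.0.113.0/24
```

Everything not explicitly allowed is denied, and the same two commands apply the rule to one instance or to an entire fleet attached to the group.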
In addition, AWS allows you to encrypt every bit of data stored on and flowing through its cloud-based storage systems. You can encrypt data at rest in on-premise storage but it’s a lot harder to encrypt data flying through the on-premise network. Amazon’s Elastic File System (EFS), a managed NFS file service, offers the ability to easily encrypt NFS traffic on the wire, a difficult feat at best with an on-premise solution.
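As a hedged sketch of the EFS point (the file system ID and mount path are placeholders, and the mount step assumes the amazon-efs-utils package is installed on the client):

```shell
# Create an EFS file system with encryption at rest enabled.
aws efs create-file-system \
    --creation-token eda-scratch \
    --encrypted

# Mount it with the "tls" option, which wraps NFS traffic in TLS on the wire.
sudo mount -t efs -o tls fs-0abc1234:/ /mnt/eda-scratch
```

That single `tls` mount option is the “encrypt NFS on the wire” feat that is so difficult to replicate with a typical on-premise NFS deployment.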
AWS’s built-in Key Management Service (KMS) can rotate encryption keys automatically. The cloud also lets you define key policies that are easy to implement and maintain.
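Turning on automatic rotation is a one-liner per key. In this sketch, the key ID is a placeholder copied from the output of the create step:

```shell
# Create a customer-managed key for EDA data (returns the key's ID).
aws kms create-key --description "EDA data encryption key"

# Enable automatic annual rotation (key ID below is a placeholder).
aws kms enable-key-rotation \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab

# Verify that rotation is on.
aws kms get-key-rotation-status \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab
```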
Internal corporate networks rely heavily on perimeter firewalls for security. Perimeter defense simply cannot deliver sufficient security against determined attackers, and everyone realizes this. We’ve built big, open, on-premise networks that are poorly suited to implementing adequate security protocols. Retrofitting these network architectures with additional security is time-consuming and costly, and it hurts engineering productivity. Moving to the cloud gives you a greenfield opportunity to right some of the wrongs of the past.
Continuing with AWS as an example, here are some additional advantages of EDA in the cloud:
- AWS provides physical security that’s far above and beyond on-premise security. It doesn’t publish the physical locations of its data centers. It also has professional security staff 24/7, keycard access, and additional security features that far exceed typical on-premise physical security.
- AWS automatically manages security patches and access controls for their managed services such as database services.
- AWS gives you plenty of security tools to automate security processes, audits, and so forth to protect your data.
AWS gives you so much flexibility that you can get yourself into trouble if you are not careful. If you want, you can recreate the same sorts of security holes that already exist in on-premise networks. You shouldn’t, of course, but you can if you’re not thoughtful. You just need to hire the right people to implement and maintain your cloud security.
Here are five very big differences between AWS (cloud-based) and on-premise server networking:
- Elasticity: Cloud-based systems enable you to scale up in minutes. That ability has pluses and minuses depending on how disciplined you are. On the plus side, you can quickly grow your EDA infrastructure as big as you want and then shrink it back down when you no longer need the additional capacity. All you need to do is tell the cloud service that you need more capacity and it will bring that extra capacity online for you in minutes – and will charge you for it. (That’s the minus side.) When you’re done, you can turn off the extra capacity (and stop paying for it) with the same speed. If you want to provision more EDA capacity for your on-premise network, you’ll need to beg, borrow, or steal existing capacity from someone else on your network, or order more servers, wait for the vendor to build and ship them, install them in your server room, provision them, and bring them online. That will take months.
- Fault tolerance: On-premise networks rely on large, monolithic service architectures, which saddle EDA vendors with more than 30 years of technical debt. The cloud operates on a different model, one based on containers and microservices, which is inherently redundant and fault-tolerant if you write your code correctly. The difference between redundancy in the cloud and in on-premise networks is night and day: no private network can match the redundancy of cloud systems, which offer redundant servers inside each data center and redundant data centers in multiple worldwide geographic locations, protecting your data from natural and man-made disasters.
- Network segmentation: Many semiconductor developers have several design centers distributed around the world and there may be IP in use on a project that cannot be shared with certain geographic locations either by law or by contract. Cloud networks are already set up with automated tools for network segmentation that can enforce geography-specific rules through VPCs (Virtual Private Clouds), which are easy to set up. VPCs allow you to set up subnets with restrictions based on routing tables so that IP management and control become highly automated.
- Removal of single points of failure: The typical EDA grid configuration has several built-in single points of failure. For example, a central job dispatcher generally runs on one single node. If that node dies, all EDA work halts. The same is true for EDA license servers and for configuration-management and version-control servers. Again, because cloud networks are based on the microservices concept, the cloud simply doesn’t need to have the same single-point-of-failure vulnerabilities that on-premise networks have.
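The elasticity point above can be sketched in two commands. This assumes the EDA fleet runs behind an Auto Scaling group; the group name “eda-grid” and the capacity numbers are hypothetical:

```shell
# Grow the compute fleet for a regression burst.
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name eda-grid \
    --desired-capacity 200

# When the regression run is done, shrink it back and stop paying.
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name eda-grid \
    --desired-capacity 10
```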
To get these same advantages with on-premise networks, the grid architecture must be fundamentally changed, starting with the replacement of NFS. EDA systems need to replace the huge, monolithic file systems developed specifically for EDA with object storage. That’s a tall order – one that requires rewriting the fundamental assumptions that serve as EDA software’s foundation.
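The network-segmentation point above is similarly scriptable. As a sketch (all CIDRs and IDs are placeholders), carving a VPC into isolated subnets takes a few commands, and a subnet whose route table has no path to a restricted design center’s gateway keeps geography-restricted IP from ever being routable there:

```shell
# Create a VPC and two isolated subnets inside it.
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.1.0/24
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.2.0/24

# Give the restricted subnet its own route table; whatever routes you
# omit here (e.g., to a given site's VPN gateway) are simply unreachable.
aws ec2 create-route-table --vpc-id vpc-0abc1234
aws ec2 associate-route-table \
    --route-table-id rtb-0aaa1111 \
    --subnet-id subnet-0bbb2222
```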
In the 1980s, 1990s, and early 2000s, small EDA startups appeared to fill gaps in the offerings of the large EDA players. If they succeeded and grew, they’d eventually be gobbled up by a larger EDA vendor. That flowering of EDA startups seems to have damped down. The market has really matured.
Next wave of EDA startups to offer cloud-first tools
Going forward, I expect the next wave of EDA startups will be offering cloud-first tools that are not burdened by three decades of technical debt. They’ll be able to architect their tools specifically for the cloud.
We’re starting to see this happen. For example, Metrics, a Canadian EDA startup, offers a pay-by-the-minute, cloud-based simulator and verification manager. Although one job on one cloud server might run slower than a monolithic simulator running on an on-premise server, Metrics has architected its tools so that you can throw more servers at the problem, allowing you to run all of your jobs at once. Multiple simulation jobs running concurrently on multiple servers will ultimately finish faster than the same jobs running serially on one slightly faster on-premise simulator.
That’s the kind of innovation that we’re going to see. That’s the future of EDA.
Derek Magill is executive director and president at HPC Pros. Derek has 20 years of experience supporting semiconductor engineering functions. His main focus has been in system architecture and technical management, but over the years he has been involved with technologies such as EDA licensing, ClearCase, HPC architecture, IP management and engineering software support. Derek spent 15 years at Texas Instruments in various technical and managerial roles. He is currently a senior manager, IT at Qualcomm managing the Global License Infrastructure team as well as the lead technical architect for the company's engineering cloud activities.
The Electronic System Design (ESD) Alliance, a SEMI Strategic Association Partner, is the central voice to communicate and promote the value of the semiconductor design ecosystem as a vital component of the global electronics industry. As an international association of companies providing goods and services throughout the semiconductor design ecosystem, it provides a forum to address technical, marketing, economic and legislative issues affecting the entire industry. The ESD Alliance also stages events that promote networking, learning and collaboration among member companies. To learn more about the ESD Alliance and how to join the group, visit www.esd-alliance.org or contact Bob Smith at email@example.com.