Addressing the Data Challenges in EDA - A Closer Look at Software-Defined Storage

Liran Zvibel, Co-Founder and CTO, WekaIO

Finding Ways to Improve Productivity in Unlikely Places

Today, many industries face time-to-market challenges, and semiconductor design is no different. In the Electronic Design Automation (EDA) industry, design houses are under tremendous pressure to keep pace with the continuous growth in complexity of the chips that power modern electronic devices. That complexity, fueled by consumer demand for more features and performance, has made design simulation and verification even more critical to successful first-pass chip tape-outs, placing huge pressure on engineering and IT to keep chips on schedule and within budget.

The challenge is that more complex simulations need to be completed in less time, and legacy external storage is often the reason for lengthy chip design cycles: the applications are starved of data. The conventional IT response is to add more compute resources and EDA tool licenses, but purchasing expensive equipment and tools may not shorten design verification because it does not address the primary problem, storage bottlenecks. These bottlenecks occur at the network-attached storage (NAS) filer, leaving applications starved of data and design teams unproductive. Legacy NAS systems were not designed for the diverse workloads found in EDA today: complex directory structures at massive scale, metadata-heavy I/O (input/output), a mix of large and small files, and both random and sequential access patterns. Front-end and back-end chip design processes have unique storage requirements, so combining I/O-intensive and bandwidth-intensive workloads on the same storage system often results in huge bottlenecks that delay final tape-out.

New file system technologies are optimized for flash and can accelerate product design workflows

Historically, scale-out NAS has been an attractive solution that kept pace with increasing performance and capacity demands across many industries. However, it comes with compromises such as high management overhead, forklift upgrades, and islands of storage. In EDA, each new chip design demands more storage capacity and performance than the last. The number of simulations performed and the amount of data being produced today demand a radical departure from traditional storage architectures in order to maintain productivity.

Many workflows, especially complex chip designs, can benefit greatly from the performance that can be achieved with flash technology. Flash is ideal for front-end design, which requires the ability to rapidly process small files. Although scale-out NAS is great for streaming the large files common in back-end design, it cannot deliver small-file performance at the scale required by today's designs. EDA increasingly requires storage optimized for the entire design flow.
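
To see why these two patterns stress storage so differently, consider a minimal Python microbenchmark sketch. The mount point, file counts, and sizes below are hypothetical; it simply contrasts a metadata-heavy, small-file pattern typical of front-end design with a bandwidth-heavy streaming pattern typical of back-end design.

```python
import os
import time

MOUNT = "/mnt/scratch"  # hypothetical mount point on the file system under test

def small_file_workload(n_files=10_000, size=4096):
    """Metadata-heavy front-end pattern: create many tiny files."""
    payload = os.urandom(size)
    start = time.perf_counter()
    for i in range(n_files):
        with open(os.path.join(MOUNT, f"cell_{i}.dat"), "wb") as f:
            f.write(payload)
    return n_files / (time.perf_counter() - start)  # file creates per second

def streaming_workload(total_bytes=4 * 1024**3, chunk=8 * 1024**2):
    """Bandwidth-heavy back-end pattern: one large sequential file."""
    payload = os.urandom(chunk)
    start = time.perf_counter()
    with open(os.path.join(MOUNT, "layout.gds"), "wb") as f:
        for _ in range(total_bytes // chunk):
            f.write(payload)
    return total_bytes / (time.perf_counter() - start) / 1024**2  # MB/s

if __name__ == "__main__":
    print(f"small files: {small_file_workload():,.0f} creates/s")
    print(f"streaming:   {streaming_workload():,.0f} MB/s")
```

On legacy NAS the first number typically collapses long before the second does, which is exactly the front-end bottleneck described above.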

Increase productivity and future-proof your data center

Software-defined storage (SDS) is becoming widely adopted to provide both small- and large-file performance at low latencies without the cost, complexity, and performance limitations of legacy external storage systems. According to several analysts, the global SDS market could be as large as $40B, with file-based SDS accounting for approximately $7B. Most applications still require a file system to organize and store their data, and inefficient disk operations cost precious time, leading to idle workers and lost productivity.

A critical component of a high-performance SDS solution is the underlying file system. The highest-performing SDS solutions are based on a parallel, distributed file system, one that dynamically and independently scales both performance and capacity and has been designed for flash technology. Designing for flash means that data is stored in the same format used by the flash device, greatly improving storage efficiency, performance, and ultimately, worker productivity. Flash memory is the key to achieving the low-latency, small-file performance that trading systems, databases, and EDA simulation tools rely on.
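
The practical consequence of scaling performance and capacity independently is simpler sizing. The sketch below is a first-order Python model; the per-node figures are illustrative assumptions, not any vendor's specifications.

```python
# First-order sizing model for a parallel, distributed file system.
# Per-node figures are illustrative assumptions, not vendor specifications.
IOPS_PER_PERFORMANCE_NODE = 200_000   # assumed small-file IOPS per node
TB_PER_CAPACITY_NODE = 100            # assumed usable terabytes per node

def nodes_needed(target_iops, target_tb):
    """Size each dimension from its own target: performance and
    capacity scale independently instead of being bought together."""
    perf_nodes = -(-target_iops // IOPS_PER_PERFORMANCE_NODE)  # ceiling
    cap_nodes = -(-target_tb // TB_PER_CAPACITY_NODE)
    return perf_nodes, cap_nodes

# Example: a verification farm needing 1.5M IOPS over only 200 TB of hot data
print(nodes_needed(1_500_000, 200))  # -> (8, 2)
```

The point of the model is that the performance target and the capacity target drive separate node counts, rather than forcing the purchase of a monolithic filer sized for the worse of the two.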

SDS makes economic sense with dynamic tuning in response to productivity demands

A key advantage of SDS solutions is their flexibility to run either alongside your applications on shared infrastructure (known as hyperconverged) or separately on dedicated hardware. In contrast, traditional NAS uses rigid configurations that run on specialized hardware and do not scale, resulting in wasted IT resources. This does not mean that NAS systems are not useful; quite the contrary, legacy NAS devices can be repurposed as a more economical tier of storage for applications that do not require the extreme performance of flash. Inactive data can be moved from the performance (flash) tier to slower, more economical NAS for long-term storage.
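
A minimal Python sketch of such an age-based tiering policy, assuming the flash tier and the repurposed NAS tier are both mounted locally; the paths and the idle threshold are hypothetical:

```python
import shutil
import time
from pathlib import Path

FLASH_TIER = Path("/mnt/flash/projects")  # hypothetical hot (flash) tier
NAS_TIER = Path("/mnt/nas/archive")       # hypothetical repurposed NAS tier
MAX_IDLE_DAYS = 30                        # demote files untouched this long

def demote_inactive(now=None):
    """Move files not accessed recently from flash to NAS,
    preserving the relative directory layout."""
    cutoff = (now or time.time()) - MAX_IDLE_DAYS * 86_400
    for src in FLASH_TIER.rglob("*"):
        if src.is_file() and src.stat().st_atime < cutoff:
            dst = NAS_TIER / src.relative_to(FLASH_TIER)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dst))

if __name__ == "__main__":
    demote_inactive()
```

In practice this policy would run inside the storage software itself; the sketch only illustrates the demotion logic.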

Productivity stems from operational efficiency. In EDA, storage solutions that deliver on-demand performance and capacity during peak simulation periods can have a tremendous impact on an organization's ability to achieve on-time chip delivery. By avoiding rigid, hardware-based storage architectures, designers can achieve breakthrough storage system performance at low latencies and at much lower cost. When verification and simulation comprise 60% of the chip design cycle, it makes sense to target these areas to improve operational efficiency. EDA organizations must ask themselves: if we could reduce this time by 30% to 50%, how much more productive could we be, and what could that mean for our bottom line?
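
The arithmetic behind that question is easy to make concrete. Assuming a hypothetical 12-month design cycle, the short Python calculation below works through the 30% and 50% figures from the paragraph above:

```python
# Back-of-the-envelope impact of faster verification on the whole cycle.
cycle_months = 12.0           # hypothetical overall design cycle length
verif_share = 0.60            # verification/simulation share of the cycle
for speedup in (0.30, 0.50):  # the 30% and 50% reductions cited above
    saved = cycle_months * verif_share * speedup
    print(f"{speedup:.0%} faster verification saves {saved:.1f} months "
          f"({saved / cycle_months:.0%} of the full cycle)")
# -> 30% saves 2.2 months (18% of the cycle); 50% saves 3.6 months (30%)
```

Even at the conservative end, faster verification shortens the overall cycle by roughly two months.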
