
Addressing AI While Keeping the MIPSiness In MIPS

eetimes.com 2024/10/4

SANTA CLARA, Calif. – MIPS, now targeting AI applications for its application-specific data movement cores, is evolving with a careful eye on its strengths. "MIPS had a choice to make, because most of our RISC-V competitors are also publicly, or not publicly, pivoting hard towards AI," MIPS CEO Sameer Wasson told EE Times. "The choice we made was to look at the problems others are not solving well and try to match them with what we can do better."

For MIPS, this means data movement, something both deeply embedded in MIPS' history and expertise, and absolutely critical to performant AI chips and systems.

"The problem we want to solve is to build the best data processing engine," Wasson said. "It's a mission which may not have the buzz to it [versus AI IP], relatively speaking, but I'm very comfortable with it, frankly, because it allows us to fly under the radar."

Sameer Wasson, MIPS CEO (Source: TI)

Customers have been building their own proprietary cores for data movement for a long time, he added. MIPS hopes to replace these proprietary cores.


"[AI] architecture needs to evolve," Wasson said. "The data movement engine becomes a DPU and offloading from the CPU or GPU becomes key. That's how we're going to be able to do 300 Gb/s or 3 Tb/s or whatever is needed."

More efficient data movement can help tackle power consumption in the data center by improving utilization of CPUs, GPUs, and accelerators, and by easing thermal constraints.

MIPS sees opportunities for its DPU cores in several places in a data center AI system today. These include offloading data movement from host CPUs and using parallelism and multithreading for inline processing of network data, as one of MIPS' smartNIC customers does. Emerging applications for data movement include AI memories and storage, alongside GPUs and custom AI accelerators.

Wasson is particularly excited about the potential for new memory technologies, such as intelligent CXL fabrics or intelligent DIMMs, where MIPS' multi-threaded, PPA-optimized RISC-V cores work well.

"So far we have been bringing data over to the control side to do the processing," he said. "What if we took the controller over to the data? This is near-memory compute…the TAM [total available market] is huge."

The vision is to embed small compute cores into the memory, not the other way around. With CXL-enabled memory pooling becoming a reality, there is an opportunity to do some pre-processing, such as traffic shaping and prioritization.

"Think of pipelining at a system level," he said. "What data is going to be needed first? What data will be needed next? Even if you can shave only microseconds off a transaction, that adds up, given the number of transactions, so you start getting CPU utilization back up, which reduces the number of CPUs you need, which reduces the power you need."

Data center customers see memory as both a CapEx and OpEx problem today, Wasson said, with OpEx particularly poor when pools of memory sit idle waiting for compute, and vice versa.

"You couldn't put an x86 core in there, because that would still be a big core," he said. "Think about small processing tasks: data-oriented, real-time processing tasks. That's what's going to emerge."

Storage is a similarly big opportunity for efficient AI data movement, he said.

GPUs and AI accelerators are an emerging opportunity. Processing in a GPU is split into scalar, vector and matrix multiplication. Matrix multiplication acceleration gets a lot of attention, but what about the scalar part?

"In many ways, scalar is the most boring part, but it is also the most difficult part in many ways, because only three companies do it," Wasson said. "If you can cater to the emerging market of custom accelerators but standardize the programming model, you'll start catering to the largest problem out there, which is software, not hardware."

Data movement

The main features of MIPS cores include hardware multithreading capability and tightly-coupled memory, plus the ability to enable heterogeneous compute and coherent system interconnect. These together make up a quality Wasson likes to call "MIPSiness."

"This is basically MIPS' legacy, MIPSiness, taken forward," he said.

MIPS cores feature hardware multithreading and tightly-coupled memory. (Source: MIPS)

The MIPS data movement solution is usually a cluster of cores, and usually all the same kind of core (MIPS has P-cores, which are out of order, and I-cores, which are in-order) together with the MIPS coherency manager.

"Large customers like MIPS because we allow them to hook their custom acceleration into the pipeline in a native format," Wasson said, citing autonomous vehicle (AV) chipmaker Mobileye as a customer example. "That means better performance, and cost."

Tightly coupled memories enable low latency for custom accelerators, vector engines, or DSPs, while features like hardware multithreading and hardware virtualization add to flexibility.

All these features are enabled by custom instructions. MIPS is continuing to invest in its tools to allow customers to add their own instructions. This capability was previously used widely with the MIPS ISA.

"Fifteen to twenty percent of my R&D is tooling, but we are not a tools company," Wasson said. "We are a compute and IP company, and we enable customers with tools so they can write custom instructions, but we still take ownership of delivering performance."

Wasson added MIPS’ customer engagement model is a key part of its IP.

"There is a value chain here, and as a compute IP company, we have to be clear about what value we bring," he said. "We don't bring value by getting ahead of our customer. We're an enabling force for customers and I want to make sure that's where we'll stay."

RISC-V transition

MIPS pivoted away from the MIPS ISA towards RISC-V in 2018. There are two ways to transition to RISC-V, Wasson said: build a translator on top of your ISA (a six-month effort) or fully transition (more like a six-year effort). MIPS chose the latter.

"[Transitioning to RISC-V] was absolutely the right decision," Wasson said. "Proprietary architectures existed for legacy reasons, and because hardware engineers run the [semiconductor] world. But our customers are software engineers. And we want to cater to our customer base, plain and simple."

RISC-V brings the benefits of standardization while allowing implementations differentiated enough for MIPS to maintain its MIPSiness, Wasson argued.

"There is a lack of education in the market because of how RISC-V has been marketed," he said, noting that most people's perception is of RISC-V as a potential Arm-killer. "This story caters well to the media and the investor base, but I think you're limiting its potential by saying that. The potential is much larger, if you think about what RISC-V can do from a system perspective."

RISC-V can maintain the heterogeneity of a system while providing a homogeneous ISA, he said.

"If you want to pivot the system and make it heavy towards data processing, you can," he said. "If you want to make it heavy towards signal processing, you can. If you want to make it heavy on custom acceleration, you can. So from a software perspective, imagine the simplicity you're bringing in."

An SoC today might have an Arm core, a DSP and a custom accelerator, all on different ISAs, presenting multiple compilers to the software developer. RISC-V can reduce this complexity and ultimately reduce cost, Wasson said.

"Based on what we're seeing on the customer side, people are starting to use RISC-V to solve pretty much every problem on the SoC," he said. "This will bring in the next round of innovation, which is about simplifying your software stacks and focusing on the real problems, versus trying to manage multiple stacks."

While Wasson does not see Arm going anywhere, RISC-V will eventually replace many proprietary ISAs, since customers want standard architectures and standard tools.

Existing MIPS customers will need to recompile for new MIPS RISC-V cores, but Wasson said the transition should be straightforward, given the company's purposeful design decisions.

"Software is defined for the machine, which is multithreaded, cache-coherent, etc.," he said. "When we transitioned from the MIPS ISA to RISC-V ISA, we didn't transition to a generic core; we maintained the MIPSiness of it. In some cases even the memory maps are the exact same…customer application code or firmware they have written and maintained over the years won't have to change much at all."

Customer pain points are more commonly around migrating from Arm to RISC-V, he said, though he anticipates that long-term (in the next 7-10 years) migration from Arm will represent only about a third of his customer base. The rest will be people solving new and emerging problems.

Application focus

Part of keeping the MIPSiness is retaining the company's strong application focus. For AI data movement, MIPS' focus is custom offerings for AI in the data center, plus ADAS and AVs.

In the data center, these segments cover data movement for DPUs, memory, storage, and the emerging GPU/accelerator sector. Automotive covers latency-sensitive applications like the software-defined vehicle, electric vehicles, and ADAS.

"Understanding these application-oriented things, that's what's going to allow us to compete with proprietary architectures, because quite honestly, that's where you'll find them," Wasson said.

Wasson's plan is to restrict MIPS' focus to several key applications and stick to being an IP company, with no plans to become a silicon vendor.

"This is where being an IP company is helpful," he said. "If you focus on your strengths and certain applications, you still find a large number of people who want to build that technology, because you will then serve many SoC people and many system people. So your TAM does increase, because you are an IP company."

In 2018, MIPS was acquired by Wave Computing, one of the first AI chip startups, which eventually went bankrupt. MIPS, which had been treated as a separate business unit within Wave, continued to thrive. The company has retained Wave's IP, so does Wasson have plans to offer an AI accelerator IP core any time soon?

"One thing at a time!" he laughed.
