SC23

Highlights from the International Conference for High Performance Computing, Networking, Storage, and Analysis.


Unveiling the world’s first look at Affordable Intelligence.

We were thrilled to participate in SC23 at the fabulous Colorado Convention Center in Denver last month to launch and demonstrate our complete NR1™ AI Inference Solution, from silicon to software. Our two main hardware products, the NR1-S™ AI Inference Appliance and the NR1-M™ AI Inference Module, offer profitable performance, customer choice, and flexibility. Both contain the world’s first AI-centric Network Addressable Processing Unit, or NAPU™.

As of mid-December, our finished products have shipped from TSMC’s manufacturing facilities in Taiwan and will be available directly from NeuReality or through well-established industry partners in January 2024.

Great Partner Demonstrations of AI Inference With & Without CPUs

We shared much more exciting news at SC23, including partnerships with valued Deep Learning Accelerator (DLA) leaders such as Qualcomm, AMD, and IBM, who demonstrated the NR1-S™ with their own DLAs. In other words, a CPU-free server combining only DLAs and the NR1-S achieves ideal AI-centric inferencing in the data center.

Lenovo and Supermicro were also at our side as the first OEM adopters of the NR1-M™ for integrated inferencing, combining CPU, DLA, and NAPU to offload the host hardware for 10x performance, lower latency, and higher energy efficiency. A triple win for the deployment of trained AI models: good for linear scalability, good for your wallet, and good for the environment.

LEARN MORE ABOUT OUR PARTNERS


Celebrating a Great Milestone with Arm!

We couldn’t have achieved this milestone without our partners, especially Arm. Our NAPU, a network-attached heterogeneous chip, is built on Arm Neoverse cores for a versatile and flexible technology platform.

“Congratulations to the NeuReality team. It’s a great milestone you have reached getting your product to market,” said Eddie Ramirez, VP of Marketing for Infrastructure at Arm, who joined Moshe Tanach to toast the teams.

“It’s been a fantastic week for the Arm people. We’re an ecosystem company, we don’t deliver any end-products, so really, we rely on our partners to do the innovation on top of our platform,” added Ramirez.

At that point, the two AI pioneers raised a glass to current and future customers and partners.


Thinking Differently: The Future of AI-Centric Data Centers

LIVE SESSION

During the conference, CEO Moshe Tanach gave a live talk in the main exhibit hall on the problems of today’s CPU-centric data centers, which were never designed for AI. He asserted that the world’s data centers currently run their CPU silicon at only 30 percent of full capacity, leaving a significant 70 percent wasted. NeuReality addresses this issue with an exceptionally efficient AI system architecture that drives silicon to full performance: agile NR1 NAPUs now handle the tasks that CPUs with GPU add-ons attempt to manage but end up bottlenecking, at considerable cost.

He asked the AI tech community to think differently about the kind of data centers the world needs to support power-hungry AI pipelines and applications of the future.

He made the case for far more efficient AI Inference at scale and highlighted the first movers taking bold steps with NeuReality: AMD, IBM, Qualcomm, Lenovo, and Supermicro. It takes an industry to make revolutionary change, so he invited technologists, engineers, and software developers to join NeuReality in achieving the full democratization of AI: affordable, accessible, and available to all.


Another Leap Forward in the AI Compute Continuum

Remember: for every $1 you spend training an AI model, you will spend roughly $8 running inference with it. It’s a growing, compounding problem as the world deploys generative AI and large language models. As our press release cited, the NR1™ AI Inference Solution processes AI-hungry applications at 10x the speed and a fraction of today’s inference costs.
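To make that math concrete, here is a minimal back-of-the-envelope sketch in Python. The 1:8 training-to-inference ratio and the 10x figure come from the paragraph above; the dollar amounts and the assumption that cost scales inversely with throughput are purely illustrative, not NeuReality benchmarks.

# Back-of-the-envelope inference cost sketch (illustrative assumptions only).
TRAINING_SPEND = 1_000_000   # hypothetical training budget, USD
INFERENCE_MULTIPLIER = 8     # "$8 of inference for every $1 of training"
SPEEDUP = 10                 # cited 10x inference performance

baseline = TRAINING_SPEND * INFERENCE_MULTIPLIER  # lifetime inference spend
accelerated = baseline / SPEEDUP                  # simplified: cost falls with throughput

print(f"Baseline inference spend: ${baseline:,.0f}")
print(f"At 10x throughput:        ${accelerated:,.0f}")
print(f"Illustrative savings:     ${baseline - accelerated:,.0f}")

Real savings will depend on utilization, energy, and model mix; the point is simply that inference, not training, dominates an AI model’s lifetime cost.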

Listen to the next generation of AI technologists talk about it on the TechTechPotato videocast after they visited the NeuReality booth and were wowed by the IBM and Qualcomm demonstrations.

We’ll see you next year at SC24 in Atlanta with more customer stories and benchmarks!

Missed Us at SC23? Connect With Us Now!

Whether you’re handling Computer Vision, Recommendation Engines, Natural Language Processing, Fraud Detection, or Financial Risk Modeling today, or getting ready for big, intensive Generative AI and Large Language Models in the future, let’s talk. We’ll show you different ways to take advantage of AI Inference as it was meant to be.

With NeuReality’s performance promise, you can create more and better customer AI experiences to drive revenue while reducing daily operating costs.

Find out how. Contact us today.

Want to learn more about NeuReality? We’d love to talk. Set up time with our team.

CONTACT US