Author:

Shreya Singhal

Applied Generative AI Research Scientist
Claritev

Shreya Singhal is an Applied Generative AI Research Scientist at Claritev, where she works on building and optimizing large-scale AI systems with a focus on LLMs, multimodal models, and AI agents. She holds a Master’s in Computer Science from the University of Texas at Austin and has prior experience across industry and research roles at organizations such as Dell Technologies, Charles Schwab, and IIIT Hyderabad. Her work spans retrieval-augmented generation, prompt engineering, and deploying production-grade AI pipelines, with a passion for advancing the infrastructure that powers generative AI.

Software Infra

Author:

Aswini Atibudhi

Distinguished Architect
Walmart

Aswini is a Distinguished Architect at Walmart Global Tech, with over 22 years of IT experience in designing scalable AI/ML, micro frontend, microservices, and cloud applications. His professional expertise encompasses diverse domains, including e-commerce, finance, telecom, and healthcare, developed through his tenure at Walmart, Cisco, Equinix, Finastra, and TCS. During his seven-plus years at Walmart, he has been a founding member of critical platforms such as Last Mile Delivery, Fleet Management, MerchOne, Supplier Portal, and several others. As a recognized expert in generative AI, Aswini specializes in leveraging machine learning and large language models to create transformative digital experiences, including personalized content generation and AI-driven customer engagement.

He has received numerous awards, including Walmart’s Innovation Award, Equinix’s Top Performer Award, and Cisco’s Group Race Award. With many certifications in AI, machine learning, and cloud technologies, he stays at the forefront of innovation. Known for his strategic insights, Aswini has a proven ability to deliver transformative AI solutions across industries.

As AI infrastructure outgrows tightly coupled systems, we’re witnessing a shift toward openness and modularity in designing full-stack solutions for AI. In this session, we’ll examine the rise of Software-Driven Fabrics (SDF), a programmable, vendor-neutral control plane for modern AI networking. SDF enables real-time data coordination across heterogeneous accelerators and fabrics, offering a new, democratized model for GPU scalability and network resiliency.

Software Infra

Author:

Prashanth Thinakaran

Distinguished AI Infrastructure Engineer
Clockwork Systems

Prashanth Thinakaran is a Distinguished AI Infrastructure Engineer at Clockwork Systems, a pioneer in nanosecond-precise network telemetry and software-driven resilience that addresses the unprecedented scale, performance, and reliability modern AI workloads demand from GPU clusters. In this role, he partners with AI infrastructure teams at enterprises, hyperscalers, and neoclouds to increase their visibility into issues impacting cluster uptime and to optimize availability and utilization with Clockwork’s solution.

Previously, he helped AI-native companies build on cloud-based GPU platforms, providing deep technical guidance on distributed training and inference, multi-node scaling, and performance tuning across complex infrastructure stacks. His neocloud experience bridged the gap between product engineering and customer enablement, helping fast-moving teams adopt best practices in massive-scale model deployment and operations. Prior to that, Prashanth played a pivotal role at Cerebras Systems, a market leader in high-speed inference, in the design and deployment of Condor Galaxy 1, a wafer-scale supercomputer. His work enabled rapid deployment timelines and seamless scaling of AI infrastructure across globally distributed data centers designed for both inference and training.

Prashanth also holds a Ph.D. in Computer Science and Engineering from Penn State, where his research focused on high-performance computing and cloud systems. His academic work has been published in top-tier venues including USENIX NSDI, ACM Middleware, ICDCS, and ACM SoCC. He has authored over a dozen peer-reviewed papers and a book chapter, and served as a reviewer for journals such as IEEE TPDS and TCC. During his Ph.D., he held teaching roles in systems programming and computer architecture, and collaborated with Intel, VMware, and Adobe Research through internships, solving systems challenges at the intersection of academia and industry.

Systems Optimization
Memory

2025-2035 and beyond will be the era of AI in the physical world: AI will scale to touch all of our lives and everything we interact with. SiMa.ai is focused on leading the Physical AI era.

In this session, you’ll learn about:

  • What distinguishes Physical AI and how it is scaling across various verticals
  • SiMa.ai and its technology enablement for customers focused on Physical AI
  • Customer proof points (presented by customers) embarking on this transition

Author:

Krishna Rangasayee

Founder and CEO
SiMa.ai

Krishna Rangasayee is Founder and CEO of SiMa.ai. Previously, Krishna was COO of Groq and spent 18 years at Xilinx, where he held multiple senior leadership roles, including Senior Vice President and GM of the overall business and Executive Vice President of global sales. While at Xilinx, Krishna grew the business to $2.5B in revenue at 70% gross margin while creating the foundation for 10+ quarters of sustained sequential growth and market share expansion. Prior to Xilinx, he held various engineering and business roles at Altera Corporation and Cypress Semiconductor. He holds 25+ international patents and has served on the board of directors of public and private companies.

In this session, learn how Amazon SageMaker HyperPod delivers a highly resilient and performant infrastructure purpose-built for training foundation models at scale. We will explore the latest HyperPod innovations that leading AI model development organizations such as Perplexity, Stability AI, and Hugging Face leverage to build state-of-the-art models. You will also discover how to efficiently build your own foundation models that work on your private data by customizing Amazon Nova or popular open-weight models like Llama. Whether you're fine-tuning a model or building one from scratch, Amazon SageMaker AI makes it fast, cost-effective, and scalable.
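
As a rough illustration of the kind of workflow the abstract refers to (fine-tuning an open-weight model such as Llama on SageMaker), the sketch below submits a training job with the SageMaker Python SDK. This is not material from the session: the script name, S3 paths, instance type, framework versions, and hyperparameters are placeholders chosen for illustration.

    # Hypothetical sketch: submitting a Llama fine-tuning job via the SageMaker Python SDK.
    # Script name, S3 paths, instance type, versions, and hyperparameters are placeholders.
    import sagemaker
    from sagemaker.huggingface import HuggingFace

    role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

    estimator = HuggingFace(
        entry_point="train.py",           # your fine-tuning script (placeholder name)
        source_dir="scripts",             # directory holding train.py and requirements.txt
        role=role,
        instance_type="ml.p4d.24xlarge",  # GPU instance; sized to the model being tuned
        instance_count=2,                 # multi-node training
        transformers_version="4.28",      # framework versions are illustrative
        pytorch_version="2.0",
        py_version="py310",
        hyperparameters={
            "model_name_or_path": "meta-llama/Llama-2-7b-hf",
            "epochs": 1,
            "per_device_train_batch_size": 4,
        },
    )

    # Start training; the "train" channel is mounted inside the container at
    # /opt/ml/input/data/train for the entry script to read.
    estimator.fit({"train": "s3://my-bucket/fine-tune-data/"})

Note that HyperPod itself is provisioned as a persistent, resilient cluster rather than as per-job infrastructure; the job-based sketch above is simply the shortest way to see the SDK in action.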

Software Infra

Author:

Sumit Thakur

Senior Manager, AI Product Management
AWS

Systems Optimization

Author:

Haseeb Budhani

Co-Founder & CEO
Rafay Systems

Haseeb Budhani is the CEO and co-founder of Rafay Systems. He previously co-founded and led Soha Systems, which was acquired by Akamai Technologies, where he later served as Vice President of Enterprise Strategy. Haseeb has also held executive and leadership roles at Infineta Systems, NET, and several other technology companies. He holds an MBA from UC Berkeley’s Haas School of Business and a B.S. in Computer Science from the University of Southern California.

Author:

Vinesh Sukumar

VP of Product Management for AI
Qualcomm

Vinesh Sukumar currently serves as VP of Product Management for AI at Qualcomm Technologies, Inc. (QTI). In this role, he leads AI product definition, strategy, and solution deployment across multiple business units.

He has about 20 years of industry experience spread across research, engineering, and application deployment. He holds a doctorate specializing in imaging and vision systems and has also completed a business degree focused on strategy and marketing. He is a regular speaker at many AI industry forums and has authored several journal papers and two technical books.

Author:

Sebastien Jean

CTO
Phison

Sebastien Jean is the Chief Technology Officer at Phison Electronics, where he focuses on developing technology strategy and building alliances with other innovative companies. His current focus includes AI, space, security, and enterprise solutions. Sebastien also works closely with engineering teams to help integrate new concepts into products. With 26 years of experience and over 30 filed patents, he has established himself as a thought leader in the storage industry. Before joining Phison, he held senior technology positions at Micron, SanDisk, and Western Digital. He earned a BS in Computer Science at the University of Ottawa (Canada).
