View this email in your browser

Managing and Stress-Testing AI Agents, plus the Memory Wall, at MLW's HYBRID AI 2026 in San Francisco next month

AI agents are a key value-add, but they need supervising and testing. Find out how in these sessions next month in San Francisco. Plus, some insight into managing large-scale generative AI infrastructure.

Stress-Testing AI: How CSAA Built an Independent Model Validation Function to Catch Risk Before It Reaches Production

As organizations scale their use of machine learning and generative AI, the cost of model failure grows. CSAA Insurance Group built an Independent Model Validation (IMV) team to evaluate models across multiple risk dimensions: bias, fairness, robustness, explainability, and real-world impact. In this session, Aaron and Bipin will share how CSAA operationalized scalable model validation across supervised models, large language models, and vendor-built systems. You’ll hear real examples of model flaws they’ve caught and how they balance rigor with innovation. Model validation isn’t a gate; it’s how machine learning becomes safe, fair, and launch-ready.
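To make one of those risk dimensions concrete, here is a minimal sketch of a fairness-style check: comparing positive-prediction rates across groups (demographic parity). This is an illustrative assumption, not CSAA's actual IMV tooling; the function name and data are hypothetical.

```python
def demographic_parity_gap(preds, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(preds[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())

# Toy predictions for two groups: group "a" is approved 3/4 of the time,
# group "b" only 1/4 -- a gap of 0.5 that a validator would flag for review.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

A validation team would typically run checks like this (alongside robustness and explainability probes) against a threshold before a model is cleared for production.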

More info here.

When AI Agents Go Rogue: Unmasking Risky Enterprise AI Behavior with Unsupervised Learning

As enterprises rapidly adopt AI agents (e.g., Salesforce’s Agentforce), a critical risk emerges: misconfigured or compromised agents performing anomalous, potentially harmful data operations. Millie unveils an original, practical methodology for detecting such threats using unsupervised machine learning.

Drawing from a real-world Proof-of-Concept, Millie demonstrates how behavioral profiling—analyzing features engineered from system logs such as data access patterns, query syntax (SOQL keyword analysis), and IP usage, along with signals from the content moderation mechanisms embedded within LLM guardrails, such as prompt injection detection and toxicity scoring—can distinguish risky agent actions. Explore the creation of 30+ behavioral features and the application of KMeans clustering to identify agents exhibiting statistically significant deviations, serving as an early warning for misuse or overpermissive configurations. Millie will share insights into observed differences between AI agent and human user profiles, as well as challenges such as crucial data gaps that limit comprehensive monitoring.
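The clustering idea can be sketched in a few lines of scikit-learn. This is a simplified illustration, not Millie's actual pipeline: the four features, the synthetic data, and the "small cluster = anomaly" rule are all assumptions standing in for the 30+ engineered features described above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical per-agent features: queries/hour, distinct objects touched,
# share of queries with bulk-export SOQL keywords, distinct source IPs.
normal = rng.normal([20, 5, 0.02, 1], [5, 2, 0.01, 0.5], size=(200, 4))
rogue  = rng.normal([400, 60, 0.60, 8], [50, 10, 0.10, 2.0], size=(5, 4))
X = StandardScaler().fit_transform(np.vstack([normal, rogue]))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
sizes = np.bincount(km.labels_, minlength=3)
# Agents that land in very small clusters deviate from the dominant behavior
# profiles -- candidates for review as misconfigured or compromised.
flagged = np.flatnonzero(sizes[km.labels_] < 0.05 * len(X))
print(sorted(int(i) for i in flagged))
```

In a production setting the distance of each agent to its cluster centroid, not just cluster size, would typically feed the anomaly score.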

More info here.

Memory Wall for AI

Modern generative AI systems—from LLMs to multimodal models—are no longer compute-bound; they are memory-bound. As model sizes soar, inference latency is dominated by memory bandwidth, memory fragmentation, KV-cache bloat, checkpoint restore time, and PCIe/NVLink bottlenecks. This session breaks down the “Memory Wall” limiting generative model performance and shares practical techniques such as model compression, quantization, memory-efficient attention, sharding, and cold-start optimization. This talk provides actionable insights for practitioners building large-scale generative AI infrastructure.

More info here.

See the Complete Agenda and speakers from these leading brands:


What's different about Hybrid AI 2026?

At this event, we're covering new methods that take on today's most vital AI mission: closing the gap between AI hype and AI's practical, realized value. 

How can practitioners get genAI pilots to production – and get predictive AI from development to deployment – when success rates remain so low?

Join the leading experts tackling these issues at HYBRID AI 2026. Take the opportunity to talk to them, as well as listen, and find the solutions to some of your key 2026 challenges in machine learning, hybrid AI and predictive analytics.

Register Now
LinkedIn
Facebook
Twitter

This email was sent by: Rising Media, Inc., 1221 State Street, Suite 12, Santa Barbara, CA 93190

You can unsubscribe from this list.