View this email in your browser

Neurosymbolic AI, Validating External Data, Stress-Testing AI – Three More Challenges Addressed at
MLW's HYBRID AI 2026

 

Next week in San Francisco

Only a week until May 5 when HYBRID AI 2026 kicks off in San Francisco with one goal in mind: helping you achieve the AI value everyone talks about but few realize.

Operationalizing Knowledge: A Practical Path to Neurosymbolic AI

Most organizations are rich in data but poor in meaning. Knowledge lives in people’s heads, scattered documents, and disconnected systems, and every new AI initiative hits the same wall: missing context, answers no one can verify, and results that can’t be audited or explained. Garret makes the case for Neurosymbolic AI as the practical path forward: a deliberate pairing of machine learning and generative AI with the rigor of explicit knowledge and rules. The organizations getting real value from AI today aren’t choosing between traditional ML, LLMs, and symbolic methods – they’re combining them. ML and generative models do what they do best: finding patterns, structuring messy inputs, forecasting, and powering natural language interaction. A semantic layer captures what the business actually knows, and decision logic guarantees the steps that have to be right every time.

You’ll leave with a clear understanding of what Neurosymbolic AI is and why it matters, a framework for deciding where ML and GenAI belong in a solution and where deterministic logic belongs, and a practical starting point for building accurate, explainable, trustworthy hybrid AI systems in your own organization.
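To make the pattern concrete, here is a minimal sketch (illustrative only, not taken from the session) of a learned step paired with explicit, auditable rules. The extract_fields stub stands in for an ML or LLM call, and the field names and rules are invented for this example:

```python
# Minimal sketch of the neurosymbolic pattern: a learned component structures
# messy input, and explicit, deterministic rules make the final, auditable call.
# extract_fields() is a stand-in for an LLM/ML step; all names are illustrative.

def extract_fields(free_text: str) -> dict:
    """Stand-in for an LLM/ML step that structures messy input."""
    # In practice this would call a model; here it returns a fixed example.
    return {"customer_tier": "gold", "order_total": 1450.0, "region": "EU"}

# Explicit business rules: the steps that have to be right every time.
RULES = [
    ("order_total_must_be_positive", lambda f: f["order_total"] > 0),
    ("known_region", lambda f: f["region"] in {"US", "EU", "APAC"}),
    ("gold_discount_cap", lambda f: not (f["customer_tier"] == "gold"
                                         and f["order_total"] > 10_000)),
]

def decide(free_text: str) -> dict:
    fields = extract_fields(free_text)              # learned, probabilistic step
    failures = [name for name, rule in RULES if not rule(fields)]
    return {
        "fields": fields,
        "approved": not failures,                   # deterministic, explainable step
        "failed_rules": failures,                   # audit trail for every decision
    }

if __name__ == "__main__":
    print(decide("Gold-tier customer in the EU ordering 1,450 of supplies"))
```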

More info here.

Validating External Data at Scale: A Solution to the Latest Challenges

Companies often buy data from external sources to build better models. Failure to validate external data can lead to model failures and create a need for rework – for both genAI and predictive AI. James will discuss strategies for validating external data and demonstrate tools that can make the process faster and more effective. He will also discuss the ways AI can help with this today, and the ways it might help in the future.
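As a rough illustration of the kind of checks involved (not James’s actual toolkit), here is a small sketch of validating an external feed before it reaches training. It assumes pandas; the column names, thresholds, and sample data are made up:

```python
# Sketch: basic validation of an external data feed before model training.
import pandas as pd

EXPECTED_COLUMNS = {"account_id", "credit_score", "signup_date"}

def validate_external_feed(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable validation failures (empty = passed)."""
    problems = []

    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
        return problems  # later checks depend on these columns

    if df["account_id"].duplicated().any():
        problems.append("duplicate account_id values")

    null_share = df["credit_score"].isna().mean()
    if null_share > 0.05:
        problems.append(f"credit_score is {null_share:.0%} null (limit 5%)")

    out_of_range = ~df["credit_score"].dropna().between(300, 850)
    if out_of_range.any():
        problems.append(f"{int(out_of_range.sum())} credit_score values outside 300-850")

    return problems

if __name__ == "__main__":
    sample = pd.DataFrame({
        "account_id": [1, 2, 2],
        "credit_score": [710, 9999, None],
        "signup_date": ["2025-01-02", "2025-02-10", "2025-03-15"],
    })
    for issue in validate_external_feed(sample):
        print("FAILED:", issue)
```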

More info here.

Stress-Testing AI: How CSAA Built an Independent Model Validation Function to Catch Risk Before It Reaches Production

As organizations scale their use of machine learning and generative AI, the cost of model failure grows. CSAA Insurance Group built an Independent Model Validation (IMV) team to evaluate models across multiple risk dimensions: bias, fairness, robustness, explainability, and real-world impact. In this session, Bipin and Aaron will share how CSAA operationalized scalable model validation across supervised models, large language models, and vendor-built systems. You’ll hear real examples of model flaws they’ve caught and how they balance rigor with innovation. Model validation isn’t a gate; it’s how machine learning becomes safe, fair, and launch-ready.
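As a simplified illustration (not CSAA’s actual process), here is one kind of check an independent validation team might run: comparing a model’s error rate across groups and flagging large gaps. The group labels, threshold, and toy predictions are invented:

```python
# Sketch: flag a model whose error rate differs too much between groups.
from collections import defaultdict

def group_error_rates(y_true, y_pred, groups):
    """Error rate per group, e.g. per region or customer segment."""
    totals, errors = defaultdict(int), defaultdict(int)
    for truth, pred, grp in zip(y_true, y_pred, groups):
        totals[grp] += 1
        errors[grp] += int(truth != pred)
    return {grp: errors[grp] / totals[grp] for grp in totals}

def flag_disparity(rates: dict, max_gap: float = 0.10) -> bool:
    """Fail validation if the best- and worst-served groups differ by more than max_gap."""
    return (max(rates.values()) - min(rates.values())) > max_gap

if __name__ == "__main__":
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    rates = group_error_rates(y_true, y_pred, groups)
    print("error rates by group:", rates)   # A: 0.25, B: 0.50
    print("disparity flagged:", flag_disparity(rates))
```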

More info here.

See the Complete Agenda


What's different about HYBRID AI 2026?

At this event, we're covering new methods that take on today's most vital AI mission: closing the gap between AI hype and AI's practical, realized value. 

How can practitioners get genAI pilots to production – and get predictive AI from development to deployment – considering that the success rates are still extremely low?

Join the leading experts tackling these issues at HYBRID AI 2026. Take the opportunity to talk with them as well as listen, and find solutions to some of your key 2026 challenges in machine learning, hybrid AI, and predictive analytics.

Register Now
LinkedIn
Facebook
Twitter

This email was sent by: Rising Media, Inc., 1221 State Street, Suite 12, Santa Barbara, CA 93190

You can unsubscribe from this list.