Submitted URL: http://blog.gopenai.com/leveraging-llms-for-causal-reasoning-why-knowledge-and-algorithms-are-key-d1928b7051c7
Submission: On February 19 via api from US — Scanned from US

LEVERAGING LLMS FOR CAUSAL REASONING: WHY KNOWLEDGE AND ALGORITHMS ARE KEY

Anthony Alcaraz · Published in GoPenAI · 7 min read · 6 days ago


Causal reasoning — the capacity to understand cause-effect relationships and
make inferences about interventions — is fundamental to intelligence. It
underpins decision making by enabling anticipation of consequences.

This skill was long presumed exclusive to humans, requiring years of experience
to develop sophisticated causal models relating diverse real-world concepts.

However, with artificial intelligence now verging on matching certain human
abilities, there is intense focus on replicating causal cognition — a hallmark
of advanced generalized intelligence.

Could AI systems reason about cause and effect given their lack of lived
experience in the physical world?

Exciting indications of causal abilities have recently emerged from large
language models, trained via self-supervision on textual data alone.

Prompted with events described in natural language, these systems make
human-like judgments when assessing causality between pairs of statements,
with high accuracy.

Some models can even determine necessary or sufficient causes with competence
rivaling that of untrained humans.
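As an illustration of the kind of pairwise setup such evaluations use, here is a minimal sketch in Python. The prompt wording, the answer parsing, and the example events are assumptions for illustration, not any benchmark's actual protocol; the model call itself is left out.

```python
def build_causal_prompt(event_a: str, event_b: str) -> str:
    """Format a pairwise causal-judgment query for a language model."""
    return (
        "Consider the following two events:\n"
        f"  A: {event_a}\n"
        f"  B: {event_b}\n"
        "Does event A cause event B? Answer 'yes' or 'no'."
    )

def parse_causal_answer(response: str) -> bool:
    """Map a free-text model response to a boolean causal judgment."""
    return response.strip().lower().startswith("yes")

# In practice the prompt would be sent to an LLM API; here we only
# show the scaffolding around a (stand-in) model response.
prompt = build_causal_prompt(
    "It rained heavily overnight", "The streets were wet in the morning"
)
judgment = parse_causal_answer("Yes, rain typically wets the streets.")
```

Scoring many such statement pairs against human annotations is what yields the accuracy figures reported for these models.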

These advances offer a promising glimpse of a future where AI assistants
advise professionals by weighing the complex causal implications of potential
decisions, social policies are informed by AI-generated impact assessments
before implementation, and personal agents tailor recommendations to
individual contexts and preferences using personalized causal models.

However, fully realizing this vision requires confronting the limitations of
pure language-model-based approaches.

Truly reliable, versatile real-world causal reasoning demands tight
integration of multiple components: the fluid reasoning capacity of language
models, fused with structured world knowledge and algorithmic logic, yields
causal intelligence greater than the sum of its parts.

This article explains why combining language models with knowledge graphs and
specialized algorithms is essential for scalable, practical AI causal reasoning
that can tackle ambiguity, understand context, reason dynamically, and
ultimately enhance human decision making with machine-driven causal wisdom.
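One minimal way to picture that integration is a sketch under assumed data, not the article's actual method: encode structured knowledge as a directed graph of known cause-effect edges, and use an algorithmic reachability check to vet a causal claim surfaced by a language model. The edges below are illustrative assumptions.

```python
from collections import deque

# A toy knowledge graph of known direct cause-effect relations
# (hypothetical edges, for illustration only).
KNOWLEDGE = {
    "smoking": ["tar_deposits"],
    "tar_deposits": ["lung_damage"],
    "lung_damage": ["cancer"],
    "exercise": ["fitness"],
}

def is_plausible_cause(cause: str, effect: str, graph: dict) -> bool:
    """BFS reachability: a claim is supported when a directed causal
    path from `cause` to `effect` exists in the knowledge graph."""
    seen, queue = {cause}, deque([cause])
    while queue:
        node = queue.popleft()
        if node == effect:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# An LLM might assert "smoking causes cancer"; the graph check
# grounds that assertion in structured knowledge.
print(is_plausible_cause("smoking", "cancer", KNOWLEDGE))   # True
print(is_plausible_cause("exercise", "cancer", KNOWLEDGE))  # False
```

The division of labor is the point: the language model handles ambiguous natural-language claims, while the graph and its traversal algorithm supply verifiable structure.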

The author made this story available to Medium members only.






WRITTEN BY ANTHONY ALCARAZ

31K Followers · Writer for GoPenAI

Chief AI Officer & Architect: Builder of Neuro-Symbolic AI Systems @Fribl,
enhanced GenAI for HR. https://www.linkedin.com/in/anthony-alcaraz-b80763155/




