medium.aiplanet.com


132 Followers





Tarun Jain

·Pinned


AI PLANET 2023 RECAP

As we stand on the threshold of bidding farewell to 2023 and eagerly anticipate
the dawn of a new year, it is with great enthusiasm and pride that we reflect
upon the incredible journey that has defined AI Planet over the past twelve
months. GenAI Stack: In 2023, we released our most…

AI

10 min read








--------------------------------------------------------------------------------

Tarun Jain

·Pinned


INTRODUCING PANDA CODER — AI PLANET’S SERIES OF OPEN SOURCE CODER LLMS

In the fast-paced world of AI, the popularity of large language models has
skyrocketed. As new models emerge at an astonishing pace, one such marvel that’s
captured the imagination of developers and tech enthusiasts is Panda Coder🐼 — a
state-of-the-art fine-tuned large language model (LLM) that’s here to change the…

Artificial Intelligence

3 min read








--------------------------------------------------------------------------------

Plaban Nayak

·1 day ago


UNDERSTANDING AND QUERYING CODE: A RAG POWERED APPROACH

What Is Retrieval Augmented Generation? Retrieval Augmented Generation (RAG) is
an AI framework for improving the quality of LLM-generated responses by
grounding the model on external sources of knowledge to supplement the LLM’s
internal representation of information. …

Qdrant

6 min read
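To make the RAG idea above concrete, here is a minimal sketch (not the article's code) that grounds an answer on chunks retrieved from an in-memory Qdrant collection via LangChain; the code snippets, model names, and API-key requirements are illustrative assumptions.

```python
# Minimal RAG sketch: embed code snippets into an in-memory Qdrant collection,
# retrieve the chunks most similar to the question, and ground the LLM on them.
# Assumes langchain-community, langchain-openai, qdrant-client and an OPENAI_API_KEY.
from langchain_community.vectorstores import Qdrant
from langchain_openai import OpenAIEmbeddings, ChatOpenAI

snippets = [
    "def add(a, b):\n    return a + b",
    "def sub(a, b):\n    return a - b",
]

store = Qdrant.from_texts(
    snippets, OpenAIEmbeddings(), location=":memory:", collection_name="code"
)
retriever = store.as_retriever(search_kwargs={"k": 2})

question = "Which function adds two numbers?"
context = "\n\n".join(doc.page_content for doc in retriever.invoke(question))

llm = ChatOpenAI(model="gpt-3.5-turbo")
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```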








--------------------------------------------------------------------------------

Tarun Jain

·2 days ago


BUILD END-TO-END RAG WITH GEMMA AND GENAI STACK STUDIO

Want to build an LLM application without writing a single line of code? We've
got you covered. In this article, we will build an end-to-end RAG application
using AI Planet’s GenAI Stack and Google’s Gemma. GenAI Stack: We’re super
excited to introduce the GenAI Stack Studio, our latest effort to make the…

Gemma

4 min read








--------------------------------------------------------------------------------

Plaban Nayak

·Feb 11, 2024


EVALUATING NAIVE RAG AND ADVANCED RAG PIPELINE USING LANGCHAIN V.0.1.0 AND RAGAS

What is RAG (Retrieval Augmented Generation)? Retrieval Augmented Generation
(RAG) is a natural language processing (NLP) technique that combines two
fundamental tasks in NLP: information retrieval and text generation. It aims to
enhance the generation process by incorporating information from external
sources through retrieval. …

Langchain

25 min read
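As a rough illustration of the evaluation step named in the title, here is a hedged RAGAS sketch, assuming the ragas 0.1.x API and an OPENAI_API_KEY; the single-row dataset stands in for real pipeline output.

```python
# Score a RAG pipeline's output with RAGAS metrics.
# Each row needs the question, the generated answer, the retrieved contexts,
# and a ground-truth answer (required by context_recall).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision, context_recall

data = {
    "question": ["What does RAG stand for?"],
    "answer": ["Retrieval Augmented Generation."],
    "contexts": [["RAG stands for Retrieval Augmented Generation."]],
    "ground_truth": ["Retrieval Augmented Generation"],
}

result = evaluate(
    Dataset.from_dict(data),
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall],
)
print(result)  # per-metric scores between 0 and 1
```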








--------------------------------------------------------------------------------

Plaban Nayak

·Feb 4, 2024


SETTING UP QUERY PIPELINE FOR ADVANCED RAG WORKFLOW USING LLAMAINDEX

What is a QueryPipeline? QueryPipeline is a declarative API provided by
llama-index that allows users to connect the different components of a RAG
pipeline together very easily. QueryPipelines provide declarative query
orchestration to compose workflows from llama-index modules efficiently, with
fewer lines of code. There are two main ways to use a QueryPipeline: …

Llamaindex

29 min read
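A minimal sketch of the sequential "chain" form of a QueryPipeline, assuming llama-index >= 0.10 namespacing and an OpenAI key; the prompt and model are illustrative, not the article's pipeline.

```python
# Declarative QueryPipeline: the prompt's output feeds the LLM.
from llama_index.core import PromptTemplate
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.llms.openai import OpenAI

prompt = PromptTemplate("Summarize the following topic in one sentence: {topic}")
llm = OpenAI(model="gpt-3.5-turbo")

pipeline = QueryPipeline(chain=[prompt, llm], verbose=True)
print(pipeline.run(topic="retrieval augmented generation"))
```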








--------------------------------------------------------------------------------

Plaban Nayak

·Jan 28, 2024


CREATE YOUR OWN MIXTURE OF EXPERTS MODEL WITH MERGEKIT AND RUNPOD

Since the release of Mixtral-8x7B by Mistral AI, there has been renewed
interest in mixture of experts (MoE) models. This architecture exploits
expert sub-networks, only a few of which are selected and activated by
a router network during inference. Model merging is a technique that combines
two…

13 min read
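A toy PyTorch sketch of the routing idea described above; the sizes and expert count are illustrative (Mixtral-8x7B routes each token to 2 of its 8 experts), and this is not mergekit's code.

```python
# Toy mixture-of-experts routing: a router scores all experts for a token,
# but only the top-k expert sub-networks are activated and mixed.
import torch

hidden = torch.randn(1, 16)                             # one token's hidden state (toy size)
experts = [torch.nn.Linear(16, 16) for _ in range(8)]   # 8 expert sub-networks
router = torch.nn.Linear(16, 8)                         # scores each expert for this token

weights, top2 = torch.topk(torch.softmax(router(hidden), dim=-1), k=2)

# Only the two selected experts run; their outputs are mixed by the router weights.
output = sum(w * experts[i](hidden) for w, i in zip(weights[0], top2[0]))
print(output.shape)  # torch.Size([1, 16])
```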









--------------------------------------------------------------------------------

Plaban Nayak

·Jan 13, 2024


FINE TUNE SMALL MODEL MICROSOFT PHI-2 TO CONVERT NATURAL LANGUAGE TO SQL

What is phi-2? Microsoft phi-2 is a 2.7 billion-parameter language model that
demonstrates outstanding reasoning and language understanding capabilities,
showcasing state-of-the-art performance among base language models with fewer
than 13 billion parameters. …

Fine Tuning

22 min read
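For orientation, a hedged sketch of prompting the base microsoft/phi-2 checkpoint for text-to-SQL with transformers; the prompt format and toy schema are assumptions, and the article goes further by fine-tuning the model.

```python
# Prompt the base phi-2 model to draft a SQL query (before any fine-tuning).
# Assumes transformers (a release with phi support) plus accelerate for device_map.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "Table: employees(id, name, salary)\n"
    "Question: List the names of employees earning more than 50000.\n"
    "SQL:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```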








--------------------------------------------------------------------------------

Plaban Nayak

·Jan 8, 2024


ADVANCED RAG USING LLAMA INDEX

Here we will implement a concept to improve retrieval that is useful for
context-aware text processing, where we also consider the surrounding
context of a sentence to extract valuable insights. What is Llama-Index?
LlamaIndex is a data framework for LLM-based applications to ingest, structure,
and access private or domain-specific data. How to use Llama-Index? …

Llamaindex

13 min read
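The basic LlamaIndex ingest-index-query loop the excerpt alludes to looks roughly like this, assuming llama-index >= 0.10, an OPENAI_API_KEY, and a placeholder ./data folder:

```python
# Ingest private documents, structure them into a vector index, and query them.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=3)
print(query_engine.query("What does this corpus say about sentence-level context?"))
```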








--------------------------------------------------------------------------------

Plaban Nayak

·Jan 5, 2024


CONVERSE WITH IMAGES USING IDEFICS 9B MULTIMODAL LLM AND COMPARE RESULTS WITH
LLAVA AND GPT-4-VISION

IDEFICS (Image-aware Decoder Enhanced à la Flamingo with Interleaved
Cross-attentionS) is an open-access reproduction of Flamingo, a closed-source
visual language model developed by DeepMind. Like GPT-4, the multimodal model
accepts arbitrary sequences of image and text inputs and produces text outputs.
…

Multimodal Ai

19 min read
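A hedged transformers sketch of chatting with IDEFICS-9B about an image; the instruct checkpoint, sample image URL, and generation settings are assumptions, not the article's exact setup.

```python
# Interleave an image and text in one prompt and let IDEFICS answer.
# Assumes a recent transformers release, accelerate, and enough memory for the 9B model.
import torch
from transformers import AutoProcessor, IdeficsForVisionText2Text

checkpoint = "HuggingFaceM4/idefics-9b-instruct"
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

prompts = [
    [
        "User: What is in this image?",
        "http://images.cocodataset.org/val2017/000000039769.jpg",  # any reachable image URL
        "<end_of_utterance>",
        "\nAssistant:",
    ],
]
inputs = processor(prompts, return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```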








--------------------------------------------------------------------------------

Plaban Nayak

·Jan 3, 2024


NO CODE LLM FINE TUNING USING AXOLOTL

What is Axolotl? Axolotl is a tool designed to streamline the fine-tuning of
various AI models, offering support for multiple configurations and
architectures. Features: train various Hugging Face models such as LLaMA, Pythia,
Falcon, and MPT; support for full fine-tuning, LoRA, QLoRA, ReLoRA, and GPTQ;
customize configurations using a simple YAML file or CLI overrides; load different dataset…

Axolotl

104 min read








--------------------------------------------------------------------------------

Plaban Nayak

·Dec 31, 2023


CREATING A NATURAL LANGUAGE TO SQL SYSTEM USING LLAMA INDEX

In recent times, there has been a surge in the popularity of Large Language
Models (LLMs) due to their impressive ability to generate coherent and
contextually relevant text across various domains. …

Llamaindex

10 min read
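One way to build such a system with LlamaIndex is its text-to-SQL query engine; a minimal sketch follows, assuming llama-index >= 0.10, SQLAlchemy, an OPENAI_API_KEY, and a toy in-memory table that stands in for a real database.

```python
# Natural language to SQL over a toy SQLite table with LlamaIndex.
from sqlalchemy import create_engine, text
from llama_index.core import SQLDatabase
from llama_index.core.query_engine import NLSQLTableQueryEngine

engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE employees (id INTEGER, name TEXT, salary INTEGER)"))
    conn.execute(text("INSERT INTO employees VALUES (1, 'Ada', 90000), (2, 'Bob', 40000)"))

sql_database = SQLDatabase(engine, include_tables=["employees"])
query_engine = NLSQLTableQueryEngine(sql_database=sql_database, tables=["employees"])
print(query_engine.query("How many employees earn more than 50000?"))
```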








--------------------------------------------------------------------------------

Plaban Nayak

·Dec 28, 2023


MULTIMODAL RAG USING LANGCHAIN EXPRESSION LANGUAGE AND GPT4-VISION

Many documents contain a mixture of content types, including images and text. Yet
the information captured in images is lost in most RAG applications. With the
emergence of multimodal LLMs like GPT-4V, LLaVA, or FUYU-8b, it is worth
considering how to utilize images in a RAG pipeline. There are a few options of…

Unstructured

13 min read
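One such option, summarizing each image with a vision model and indexing the summary as text, can be sketched with LangChain Expression Language roughly as follows; the gpt-4-vision-preview model name and the base64 plumbing are assumptions, not the article's code.

```python
# Summarize an image with a vision LLM so the summary can be embedded like any text chunk.
import base64

from langchain_core.messages import HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI


def summarize_image(path: str) -> str:
    b64 = base64.b64encode(open(path, "rb").read()).decode()
    msg = HumanMessage(content=[
        {"type": "text", "text": "Summarize this figure for retrieval."},
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
    ])
    # LCEL: pipe the vision model into a string output parser.
    chain = ChatOpenAI(model="gpt-4-vision-preview", max_tokens=256) | StrOutputParser()
    return chain.invoke([msg])

# The returned summary is then embedded alongside normal text chunks,
# so the information in images is no longer lost at retrieval time.
```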








--------------------------------------------------------------------------------

Plaban Nayak

·Dec 17, 2023


IMPLEMENT CONTEXTUAL COMPRESSION AND FILTERING IN RAG PIPELINE

Contextual Compressors and Filters: One of the biggest problems we can face
in RAG is the content that is actually retrieved by the retrievers. Not all of
the retrieved context is useful; often only a very small portion of a larger
retrieved chunk carries information relevant to the overall answer. At times there will be scenarios…

Langchain

22 min read
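A minimal LangChain sketch of that idea: wrap a base retriever so an LLM extracts only the query-relevant passages before they reach the prompt. FAISS, the toy texts, and the model names are illustrative choices, not the article's exact setup.

```python
# Contextual compression: the base retriever over-fetches, the LLM extractor trims.
# Assumes langchain, langchain-community, langchain-openai, faiss-cpu and an OPENAI_API_KEY.
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

texts = [
    "RAG grounds LLM answers on retrieved documents. The office cafeteria opens at 9 am.",
    "Vector databases store embeddings for similarity search.",
]
base_retriever = FAISS.from_texts(texts, OpenAIEmbeddings()).as_retriever()

compressor = LLMChainExtractor.from_llm(ChatOpenAI(model="gpt-3.5-turbo", temperature=0))
retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=base_retriever
)

# Only the sentences relevant to the query survive compression.
for doc in retriever.invoke("How does RAG reduce irrelevant context?"):
    print(doc.page_content)
```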








--------------------------------------------------------------------------------

Plaban Nayak

·Dec 3, 2023


IMPLEMENT RAG WITH KNOWLEDGE GRAPH AND LLAMA-INDEX

Hallucination is a common problem when working with large language models
(LLMs). LLMs produce fluent and coherent text but often generate inaccurate or
inconsistent information. One of the ways to prevent hallucination in LLMs is by
using external knowledge sources, such as databases or knowledge graphs, that
provide factual information.

Knowledge Graph

25 min read
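A hedged sketch of grounding answers on an LLM-extracted knowledge graph with LlamaIndex's KnowledgeGraphIndex, assuming llama-index >= 0.10 and an OPENAI_API_KEY; the in-memory graph store and ./data folder are placeholders.

```python
# Build a knowledge graph from documents and query it instead of relying on the LLM alone.
from llama_index.core import KnowledgeGraphIndex, SimpleDirectoryReader, StorageContext
from llama_index.core.graph_stores import SimpleGraphStore

documents = SimpleDirectoryReader("./data").load_data()
storage_context = StorageContext.from_defaults(graph_store=SimpleGraphStore())

# Triplets extracted by the LLM become graph edges that later ground the answers.
index = KnowledgeGraphIndex.from_documents(
    documents,
    storage_context=storage_context,
    max_triplets_per_chunk=5,
)
query_engine = index.as_query_engine(include_text=False, response_mode="tree_summarize")
print(query_engine.query("How are the main entities in these documents related?"))
```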








--------------------------------------------------------------------------------

Plaban Nayak

·Nov 24, 2023


OVERCOME LOST IN MIDDLE PHENOMENON IN RAG USING LONGCONTEXTRETRIEVER

In certain aspects, both humans and large language models (LLMs) share a common
behavior pattern: they tend to excel in processing information located at the
beginning or end of a given content, while information in the middle often goes
unnoticed. Researchers from Stanford University, the University of California,
Berkeley, and…

Langchain

57 min read
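LangChain ships a document transformer aimed at exactly this effect; a minimal sketch (with toy documents assumed to be already sorted by relevance) of pushing the strongest chunks to the edges of the context window:

```python
# Mitigate "lost in the middle": reorder documents so the most relevant ones
# sit at the beginning and end of the prompt, with the weakest in the middle.
from langchain_community.document_transformers import LongContextReorder
from langchain_core.documents import Document

docs = [Document(page_content=f"chunk {i}") for i in range(10)]  # ranked best (0) to worst (9)

reordered = LongContextReorder().transform_documents(docs)
print([d.page_content for d in reordered])
```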








--------------------------------------------------------------------------------

Plaban Nayak

·Nov 11, 2023


IMPLEMENTING RAG USING LANGCHAIN, OLLAMA, AND CHAINLIT ON WINDOWS USING WSL

What is Ollama? Ollama empowers you to run open-source models locally. It
automatically fetches models from optimal sources and, if your computer has a
dedicated GPU, it seamlessly employs GPU acceleration without requiring manual
configuration. Customizing the model is easily achievable by modifying the
prompt, and Langchain is not a…

Ollama

15 min read
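Calling a locally served model from LangChain then takes only a couple of lines; a hedged sketch, assuming the Ollama daemon is running and a model such as llama2 has been pulled (ollama pull llama2):

```python
# Query a local model served by Ollama through LangChain.
from langchain_community.llms import Ollama

llm = Ollama(model="llama2", temperature=0.1)
print(llm.invoke("Explain retrieval augmented generation in two sentences."))
```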








--------------------------------------------------------------------------------

Plaban Nayak

·Nov 4, 2023


ADVANCED RAG — IMPROVING RETRIEVAL USING HYPOTHETICAL DOCUMENT EMBEDDINGS (HYDE)

What is HyDE? HyDE uses a large language model, like ChatGPT, to create a
hypothetical document in response to a query, as opposed to using the query
and its computed vector to search the vector database directly. It goes a step
further by using an unsupervised encoder learned through contrastive methods.
This…

Langchain

140 min read
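A minimal LangChain sketch of the HyDE idea: the LLM drafts a hypothetical answer document and its embedding, not the raw query's, is used to search the vector store. FAISS, the prompt key, and the toy texts are illustrative assumptions.

```python
# HyDE retrieval: embed an LLM-generated pseudo-document instead of the raw query.
# Assumes langchain, langchain-community, langchain-openai, faiss-cpu and an OPENAI_API_KEY.
from langchain.chains import HypotheticalDocumentEmbedder
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

hyde_embeddings = HypotheticalDocumentEmbedder.from_llm(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    base_embeddings=OpenAIEmbeddings(),
    prompt_key="web_search",  # built-in prompt that asks for a passage answering the question
)

store = FAISS.from_texts(
    ["HyDE retrieves with a generated pseudo-document.", "Cats sleep most of the day."],
    hyde_embeddings,
)
print(store.similarity_search("How does HyDE improve retrieval?", k=1))
```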








--------------------------------------------------------------------------------

Plaban Nayak

·Oct 29, 2023


ADVANCED RAG- PROVIDING BROADER CONTEXT TO LLMS USING PARENTDOCUMENTRETRIEVER

Traditional RAG Paradigm: RAG represents a fusion of retrieval systems and LLMs.
While LLMs demonstrate proficiency in creating content according to context, RAG
supports them by identifying the precise context from various data sources. …

Langchain

10 min read
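A minimal LangChain sketch of the retriever the title refers to: match on small child chunks, but return the larger parent chunk to the LLM. Chroma, the splitter sizes, and the sample text are illustrative, not the article's exact configuration.

```python
# ParentDocumentRetriever: index small chunks for precise matching,
# hand back their larger parents for broader context.
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

retriever = ParentDocumentRetriever(
    vectorstore=Chroma(collection_name="parents", embedding_function=OpenAIEmbeddings()),
    docstore=InMemoryStore(),                                         # holds the full parent chunks
    child_splitter=RecursiveCharacterTextSplitter(chunk_size=200),    # small chunks for matching
    parent_splitter=RecursiveCharacterTextSplitter(chunk_size=1000),  # larger chunks returned to the LLM
)
retriever.add_documents([Document(page_content="Sample document text about RAG context windows.")])
print(retriever.invoke("How is broader context provided?"))
```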








--------------------------------------------------------------------------------

Plaban Nayak

·Oct 24, 2023


ADVANCED RAG- COHERE RE-RANKER

LLMs can acquire new information in at least two ways: weight updates (e.g.,
fine-tuning) and RAG (retrieval-augmented generation). Retrieval-augmented generation
(RAG) is the practice of extending the “memory” or knowledge of an LLM by providing
access to information from an external data source. The typical RAG process is
as follows: the user asks a question or provides an…

Rag

13 min read
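A hedged sketch of adding the re-rank step to that process: a base retriever over-fetches candidates and Cohere's reranker reorders them by relevance before they reach the LLM. FAISS, the toy texts, and the key handling are assumptions; a COHERE_API_KEY is required.

```python
# Re-rank over-fetched candidates with Cohere before passing them to the LLM.
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

base = FAISS.from_texts(
    ["doc about re-rankers", "doc about cooking", "doc about RAG"],
    OpenAIEmbeddings(),
).as_retriever(search_kwargs={"k": 3})

reranked = ContextualCompressionRetriever(
    base_compressor=CohereRerank(top_n=2),  # keep only the two most relevant candidates
    base_retriever=base,
)
print(reranked.invoke("How do re-rankers improve RAG retrieval?"))
```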








Ecosystem educating and building AI for All



EDITORS


CHANUKYA PATNAIK

Entrepreneur | Data Scientist | Marketer. Engineering the future of AI @AI
Planet



NIKHIL CHINTAWAR

Software Engineer at AI Planet



PLABAN NAYAK

Machine Learning and Deep Learning enthusiast



