
Introduction to LLMs

DEFINITIONS
Generative AI: AI systems that can produce realistic content (text, images, etc.)
Large Language Models (LLMs): Large neural networks trained at internet scale to estimate the probability of sequences of words.
Ex: GPT, FLAN-T5, LLaMA, PaLM, BLOOM (transformers with billions of parameters)
Abilities (and the computing resources needed) tend to rise with the number of parameters.

USE CASES
– Standard NLP tasks (classification, summarization, etc.)
– Content generation
– Reasoning (Q&A, planning, coding, etc.)

In-context learning: Specifying the task to perform directly in the prompt
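For illustration, a minimal in-context (few-shot) learning sketch, assuming the Hugging Face transformers library and the public gpt2 checkpoint (neither is prescribed by the cheatsheet): the task is specified entirely in the prompt, and no weights are updated.

# In-context learning: the prompt contains the task description and two worked
# examples; the model simply continues the pattern.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: The plot was dull and predictable. Sentiment: negative\n"
    "Review: A beautiful, moving film. Sentiment: positive\n"
    "Review: I would happily watch it again. Sentiment:"
)

print(generator(prompt, max_new_tokens=3, do_sample=False)[0]["generated_text"])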
TRANSFORMERS
– Can scale efficiently to use multi-core GPUs
– Can process input data in parallel
– Pay attention to all other words when processing a word
Transformers' strength lies in understanding the context and relevance of all the words in a sentence.

Token: Word or sub-word; the basic unit processed by transformers
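A quick tokenization sketch; the gpt2 tokenizer is an arbitrary choice for illustration:

# Tokenization: text is split into sub-word tokens and mapped to integer IDs.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokens = tokenizer.tokenize("Transformers process tokenized text.")
ids = tokenizer.convert_tokens_to_ids(tokens)
print(tokens)  # sub-word pieces, e.g. a word split into several tokens
print(ids)     # integer IDs fed to the embedding layer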
Encoder: Processes the input sequence to generate a vector representation (or embedding) for each token
Decoder: Processes input tokens to produce new tokens
Embedding layer: Maps each token to a trainable vector
Positional encoding vector: Added to the token embedding vector to keep track of the token's position
Self-attention: Computes the importance of each word in the input sequence to all other words in the sequence
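A toy NumPy sketch of scaled dot-product self-attention (the standard formulation; the dimensions and random weights are illustrative only):

# Scaled dot-product self-attention over a short sequence of token embeddings.
# Each row of `weights` says how strongly one token attends to every other token.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                       # 4 tokens, 8-dimensional embeddings
x = rng.normal(size=(seq_len, d_model))       # token embeddings + positional encodings

W_q = rng.normal(size=(d_model, d_model))     # learned projections (random here)
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d_model)           # similarity of every token to every other
scores -= scores.max(axis=-1, keepdims=True)  # numerical stability before softmax
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
output = weights @ V                          # context-aware representation of each token

print(weights.round(2))                       # each row sums to 1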
TYPES OF LLMS

Encoder only = Autoencoding model
Ex: BERT, RoBERTa
These are not generative models.
PRE-TRAINING OBJECTIVE: To predict tokens masked in a sentence (= Masked Language Modeling)
OUTPUT: Encoded representation of the text
USE CASE(S): Sentence or token classification (e.g., NER)
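As a hedged example of masked language modeling with an encoder-only model (the checkpoint is chosen for illustration, not named in the cheatsheet):

# Masked Language Modeling: the model predicts the token hidden behind [MASK].
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("Large language models are trained on [MASK] amounts of text."):
    print(candidate["token_str"], round(candidate["score"], 3))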
Decoder only = Autoregressive model
Ex: GPT, BLOOM
PRE-TRAINING OBJECTIVE: To predict the next token based on the previous sequence of tokens (= Causal Language Modeling)
OUTPUT: Next token
USE CASES: Text generation
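A minimal causal language modeling sketch, assuming PyTorch and the public gpt2 checkpoint: given the previous tokens, the model outputs a distribution over the next token.

# Decoder-only (autoregressive) model: predict the next token from the previous tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits              # (batch, seq_len, vocab_size)

next_token_id = int(logits[0, -1].argmax())      # greedy pick of the most likely next token
print(tokenizer.decode([next_token_id]))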
Encoder-Decoder = Sequence-to-sequence model
Ex: T5, BART
PRE-TRAINING OBJECTIVE: Varies from model to model (e.g., span corruption, as in T5)
OUTPUT: Sentinel token + predicted tokens
USE CASES: Translation, Q&A, summarization
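A short sequence-to-sequence sketch; the translation task and the t5-small checkpoint are illustrative choices, not requirements:

# Encoder-decoder (seq-to-seq) model: maps an input sequence to a new output sequence.
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="t5-small")
result = translator("Transformers map an input sequence to an output sequence.")
print(result[0]["translation_text"])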
CONFIGURATION SETTINGS
Parameters to set at inference time

Max new tokens: Maximum number of tokens generated during completion

Decoding strategy
1. Greedy decoding: The word/token with the highest probability is selected from the final probability distribution (prone to repetition)
2. Random sampling: The model chooses an output word at random, using the probability distribution to weight the selection (can be too creative)
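A sketch contrasting greedy decoding with random sampling at inference time, assuming the Hugging Face transformers generate() API and the gpt2 checkpoint:

# Greedy decoding vs. random sampling for the same prompt; max_new_tokens caps the completion length.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Once upon a time", return_tensors="pt")

greedy = model.generate(**inputs, max_new_tokens=20, do_sample=False)  # always the top token
sampled = model.generate(**inputs, max_new_tokens=20, do_sample=True)  # weighted random draw

print(tokenizer.decode(greedy[0], skip_special_tokens=True))
print(tokenizer.decode(sampled[0], skip_special_tokens=True))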
TECHNIQUES TO CONTROL RANDOM SAMPLING
– Top K: The next token is drawn from the k tokens with the highest probabilities
– Top P: The next token is drawn from the highest-probability tokens whose combined probability exceeds p

Temperature: Influences the shape of the probability distribution through a scaling factor in the softmax layer
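A toy sketch of how temperature rescales logits before the softmax: lower temperature sharpens the distribution (closer to greedy), higher temperature flattens it (more random). The logit values are made up for illustration.

# Temperature-scaled softmax over hypothetical next-token logits.
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                     # numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [4.0, 2.0, 1.0, 0.5]                  # made-up logits for four candidate tokens
for t in (0.5, 1.0, 2.0):
    print(f"T={t}:", softmax_with_temperature(logits, t).round(3))

# The same knobs are exposed at generation time in Hugging Face transformers
# (values are examples, not recommendations):
# model.generate(**inputs, do_sample=True, top_k=50, top_p=0.9, temperature=0.7)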
© 2024 Dataiku


Cheatsheet: LLM-Powered Applications
In this cheatsheet, discover insights on model optimization for deployment, LLM-integrated applications, LLM reasoning, program-aided language models, and more.

PDF: Cheatsheet: Introduction to LLMs

PDF: Cheatsheet: LLM-Powered Applications

PDF: Cheatsheet: LLM Compute Challenges and Scaling Laws

PDF: Cheatsheet: Parameter Efficient Fine-Tuning (PEFT) Methods

PDF: Cheatsheet: LLM Instruction Fine-Tuning & Evaluation

PDF: Cheatsheet: LLM Preference Fine-Tuning (Part 1)

PDF: Cheatsheet: LLM Preference Fine-Tuning (Part 2)




GET MORE CONTENT FROM DATAIKU!

Sign up for our newsletter for exclusive updates on just-released content, Dataiku product announcements, and more.