notes.roydipta.com

URL: https://notes.roydipta.com/
Submission: On January 16 via api from US — Scanned from US


GARDEN OF 📝


Home
Literature Notes
Advanced NLP with spaCy
Deep Learning by Ian Goodfellow
DS & Algo Interview
How To 100M Learning Text Video
How to Read a Paper
How To Write a Paper
ML Interview
Papers
MultiVENT
Templates
Paper Template
Permanent Notes
Topic Template
Topics
activation-function
algorithm
deep-learning
interview
loss-in-ml
machine-learning
math
nlp
paper
probability
statistics
vision
Zettelkasten
Accuracy
Activation Function
Active Learning
AdaBoost vs. Gradient Boosting vs. XGBoost
Adaboost
Adjusted R-squared Value
AUC Score
Autoencoder for Denoising Images
Autoencoder
Averaging in Ensemble Learning
Bag of Words
Bagging
Batch Normalization
Bayes Theorem
Bayesian Optimization Hyperparameter Finding
Beam Search
Behavioral Interview
BERT
Bias & Variance
Bidirectional RNN or LSTM
Binary Cross Entropy
Binning or Bucketing
Binomial Distribution
bisect_left vs. bisect_right
BLEU Score
Boosting
Causality vs. Correlation
Central Limit Theorem
Chain Rule
CNN
Co-Variance
Collinearity
Conditional Probability
conditionally-independent-joint-distribution
Confusion Matrix
Connections - Log Likelihood, Cross Entropy, KL Divergence, Logistic Regression, and Neural Networks
Continuous Random Variable
Contrastive Learning
Contrastive Loss
Convex vs Nonconvex Function
Cosine Similarity
Cross Entropy
Cross Validation
Curse of Dimensionality
Data Augmentation
Data Imputation
Data Normalization
DBScan Clustering
Debugging Deep Learning
Decision Boundary
Decision Tree (Classification)
Decision Tree (Regression)
Decision Tree
Density Sparse Data
Dependent Variable
Derivative
determinant
diagonal-matrix
Differentiation of Product
Differentiation
Digit DP
Dimensionality Reduction
Discrete Random Variable
Discriminative vs. Generative Models
doing-literature-review
Domain vs. Codomain vs. Range
Dropout
Dying ReLU
Dynamic Programming (DP) in python
Eigendecomposition
eigenvalue-eigenvector
Elastic Net Regression
Ensemble Learning
Entropy and Information Gain
Entropy
Estimated Mean
Estimated Standard Deviation
Estimated Variance
Euclidean Norm
Expected Value for Continuous Events
Expected Value for Discrete Events
Expected Value
Exploding Gradient
Exponential Distribution
F-Beta Score
F1 Score
False Negative Error
False Positive Rate
Feature Engineering
Feature Extraction
Feature Preprocessing
Feature Selection
Finding Correlation between two datasets or distributions
frobenius-norm
fully-independent-joint-distribution
fully-joint-distribution
Gaussian Distribution
GBM
Genetic Algorithm Hyperparameter Finding
Gini Impurity
Global Minima
Gradient Boost (Classification)
Gradient Boost (Regression)
Gradient Boosting
Gradient Descent
Gradient
Graph Convolutional Network (GCN)
Greedy Decoding
Grid Search Hyperparameter Finding
GRU
Handling Imbalanced Dataset
Handling Missing Data
Handling Outliers
Heapq (nlargest or nsmallest)
Hierarchical Clustering
Hinge Loss
Histogram
How to Choose Kernel in SVM
How to combine in Ensemble Learning
How to prepare for Behavioral Interview
how-to-read-paper
Huber Loss
Hyperparameters
Hypothesis Testing
identity-matrix
Independent Variable
InfoNCE Loss
Integration by Parts or Integration of Product
Internal Covariate Shift
Interview Scheduling
Interview
joint-distribution
jupyter-notebook-on-server
K Fold Cross Validation
K-means Clustering
K-means vs. Hierarchical
K-nearest Neighbor (KNN)
Kernel in SVM
Kernel Regression
Kernel Trick
KL Divergence
L1 or Lasso Regression
L1 vs. L2 Regression
L2 or Ridge Regression
Learning Rate Scheduler
LightGBM
Likelihood
Line Equation
Linear Regression
Local Minima
Log (Odds)
Log Scale
Log-cosh Loss
Logistic Regression vs. Neural Network
Logistic Regression
Loss vs. Cost
lp-norm
LSTM
Machine Learning Algorithm Selection
Machine Learning vs. Deep Learning
Majority vote in Ensemble Learning
Margin in SVM
Marginal Probability
Matrices
max-norm
Maximal Margin Classifier
Maximum Likelihood
Mean Absolute Error (MAE)
Mean Absolute Percentage Error (MAPE)
Mean Squared Error (MSE)
Mean Squared Logarithmic Error (MSLE)
Mean
Median
Merge K-sorted List
Merge Overlapping Intervals
Meteor Score
Mini Batch SGD
ML System Design
Mode
Model Based vs. Instance Based Learning
Multi Class Cross Entropy
Multi Label Cross Entropy
Multi Layer Perceptron
Multicollinearity
Multivariable Linear Regression
Multivariate Linear Regression
Multivariate Normal Distribution
Mutual Information
N-gram Method
Naive Bayes
Negative Log Likelihood
Neural Network
norm
Normal Distribution
Null Hypothesis
Odds
One Class Classification
One Class Gaussian
One vs One Multi Class Classification
One vs Rest or One vs All Multi Class Classification
Optimizers
orthogonal-matrix
orthonormal-vector
Overcomplete Autoencoder
Overfitting
Oversampling
p-value
Padding in CNN
Parameter vs. Hyperparameter
PCA vs. Autoencoder
Pearson Correlation
Perceptron
Permutation
Perplexity
Plots Compared
Pooling
Population
Posterior Probability
Precision
Principal Component Analysis (PCA)
Prior Probability
Probability Density Function
Probability Distribution
Probability Mass Function
Probability vs. Likelihood
Problem Solving Algorithm Selection
Pruning in Decision Tree
PyTorch Loss Functions
Questions to ask in an Interview?
Quantile or Percentile
Quotient Rule or Differentiation of Division
R-squared Value
Random Forest
Random Variable
Recall
Regularization
Reinforcement Learning
Relational GCN
ReLU
RNN
ROC Curve
Root Mean Squared Error (RMSE)
Root Mean Squared Logarithmic Error (RMSLE)
ROUGE-L Score
ROUGE-LSUM Score
ROUGE-N Score
Saddle Points
scalar
Second Order Derivative or Hessian Matrix
Semi-supervised Learning
Sensitivity
Sigmoid Function
Simple Linear Regression
Singular Value Decomposition (SVD)
Soft Margin in SVM
Softmax
Softplus
Softsign
Some Common Behavioral Questions
Sources of Uncertainty
spacy-doc-object
spacy-doc-span-token
spacy-explanation-of-labels
spacy-matcher
spacy-named-entities
spacy-operator-quantifier
spacy-pattern
spacy-pipeline
spacy-pos
spacy-semantic-similarity
spacy-syntactic-dependency
Specificity
Splitting tree in Decision Tree
Stacking or Meta Model in Ensemble Learning
Standard deviation
Standardization or Normalization
Standardization
Statistical Significance
Stochastic Gradient Descent or SGD
Stride in CNN
Stump
Supervised Learning
Support Vector Machine (SVM)
Support Vector
Surprise
SVC
Shallow vs. Deep Learning
Tanh
Text Preprocessing
TF-IDF
Three Way Partitioning
trace-operator
Training a Deep Neural Network
Transformer Timeline
Triplet Loss
True Positive Rate
Two Pointer
Undercomplete Autoencoder
Undersampling
unit-vector
Unsupervised Learning
Untitled
Vanishing Gradient
Variance
vector
Weight Initialization
XGBoost

🚀 Welcome to my Brain Dump! 🧠✨

⚠️ ‼️ N.B. These are very unorganized and messy notes ‼️ ⚠️

--------------------------------------------------------------------------------


IF YOU ARE SOMEONE WHO PREFERS ORGANIZED WRITING RATHER THAN JUST SMALL NOTES, THEN HERE
IS MY BLOG

SUBSCRIBE TO GET NOTIFIED WHEN I POST NEW BLOGS



--------------------------------------------------------------------------------


FEATURED NOTES:

 1. interview
 2. DS & Algo Interview
 3. Questions to ask in an Interview?
 4. Behavioral Interview


ALL TOPICS

activation-function algorithm deep-learning interview loss-in-ml
machine-learning math nlp paper probability statistics vision

