Standardization vs. log transform?
Our take
Understanding the distinction between standardization and log transformation is essential for effective data preprocessing. Standardization rescales a feature to zero mean and unit variance while preserving the shape of its distribution, which is useful when you want to compare variables measured in different units. Log transformation, in contrast, changes the shape of the distribution itself, compressing large values and reducing right skew, though it does not guarantee normality. Whether to use one, the other, or both depends on your data's characteristics and your analysis goals.
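To make the contrast concrete, here is a minimal sketch. It uses a synthetic log-normal feature as an assumption, chosen because such data is strongly right-skewed; it is an illustration, not a prescription from the original post:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # positive, right-skewed

# Standardization: rescale to mean 0, std 1; the shape (skew) is unchanged.
x_std = (x - x.mean()) / x.std()

# Log transform: compress large values, so the shape itself changes.
# log1p (i.e. log(1 + x)) is used so zero values would not blow up.
x_log = np.log1p(x)

print(f"skew raw:          {skew(x):.2f}")
print(f"skew standardized: {skew(x_std):.2f}")  # essentially the same as raw
print(f"skew log:          {skew(x_log):.2f}")  # much closer to 0
```

Running this shows the standardized column keeping the original skew while the log-transformed column is far more symmetric, which is exactly the distinction described above.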
I have been trying to understand the use cases of both of these and I am really confused.
I know the log transform reshapes a feature's distribution to bring it closer to normal, while standardization only fixes the scale of the feature and keeps its distribution the same.
Are these things I use one after the other? Or do I simply use one or the other depending on the case (and I also don't understand when each case applies)?
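On the "one after the other" point: the two can be chained when a feature is both skewed and needs a common scale, log first to tame the skew, then standardization to rescale. A hedged sketch with scikit-learn follows; the pipeline and the toy column are illustrative assumptions, not code from the original post:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

# Log transform first (reduces skew), then standardize (mean 0, std 1).
log_then_scale = make_pipeline(
    FunctionTransformer(np.log1p, validate=True),  # log(1 + x); needs x >= -1
    StandardScaler(),
)

X = np.array([[1.0], [10.0], [100.0], [1000.0]])  # a toy skewed column
print(log_then_scale.fit_transform(X).ravel())
```

Whether chaining is warranted still depends on the data: standardization alone suffices when only the units differ, and the log transform alone may suffice when only the skew is the problem.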