Machine Learning Interview Questions and Answers
Question 11. Explain the difference between batch gradient descent and stochastic gradient descent.
Batch gradient descent updates the model parameters using the entire dataset, while stochastic gradient descent updates the parameters using one randomly selected data point at a time. Mini-batch gradient descent is a compromise, using a small subset of the data for each update.
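A minimal sketch of the contrast on least-squares linear regression; the data, learning rates, and epoch counts are illustrative, not prescriptive:

```python
import numpy as np

# Synthetic regression data with known weights (for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w + 0.01 * rng.normal(size=100)

def batch_gd(X, y, lr=0.1, epochs=200):
    """One parameter update per epoch, using the gradient over the FULL dataset."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # averaged over all examples
        w -= lr * grad
    return w

def sgd(X, y, lr=0.05, epochs=50):
    """One parameter update per EXAMPLE, visited in random order."""
    w = np.zeros(X.shape[1])
    local_rng = np.random.default_rng(1)
    for _ in range(epochs):
        for i in local_rng.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]  # gradient of a single example
            w -= lr * grad
    return w

print(batch_gd(X, y))  # both should land near true_w = [2.0, -1.0]
print(sgd(X, y))
```

Mini-batch gradient descent would replace the single index `i` with a small random slice of examples per update, averaging their gradients.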
Question 12. What is the purpose of regularization in machine learning?
Regularization is used to prevent overfitting in machine learning models by adding a penalty term to the cost function. It discourages the model from fitting the training data too closely and encourages generalization to new, unseen data.
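The "penalty term" can be made concrete with ridge regression, which adds an L2 penalty to the least-squares cost; the closed-form solution below is standard, though the data and penalty strength are illustrative:

```python
import numpy as np

# Synthetic data (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
y = X @ np.array([3.0, -2.0, 1.0]) + 0.1 * rng.normal(size=30)

def fit(X, y, alpha=0.0):
    """Minimizes ||Xw - y||^2 + alpha * ||w||^2 via the normal equations.
    alpha = 0 recovers ordinary least squares."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

w_ols = fit(X, y, alpha=0.0)     # unregularized fit
w_ridge = fit(X, y, alpha=10.0)  # penalized fit: coefficients are shrunk
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))  # → True
```

The penalty pulls the coefficient vector toward zero, trading a little training-set fit for lower variance on unseen data.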
Question 13. Explain the K-nearest neighbors (KNN) algorithm.
KNN is a simple, instance-based learning algorithm used for classification and regression. It classifies a new data point by the majority class of its k nearest neighbors in the feature space; for regression, the prediction is typically the average of the neighbors' target values.
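A from-scratch sketch of KNN classification with Euclidean distance; the toy training set is made up for the example:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distance to each point
    nearest = np.argsort(dists)[:k]              # indices of the k closest
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Two tight clusters, one per class (illustrative data).
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.0, 0.8]])
y_train = np.array([0, 0, 1, 1])

print(knn_predict(X_train, y_train, np.array([0.05, 0.1])))  # → 0
print(knn_predict(X_train, y_train, np.array([0.95, 0.9])))  # → 1
```

Note that KNN has no training phase: all computation happens at prediction time, which is why it is called instance-based (or "lazy") learning.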
Question 14. What is the difference between L1 and L2 regularization?
L1 regularization adds the absolute values of the coefficients to the cost function, encouraging sparsity, while L2 regularization adds the squared values, penalizing large coefficients. L1 tends to produce sparse models, while L2 prevents extreme values in the coefficients.
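The sparsity contrast can be shown with a known special case: for an orthonormal design, the L1 (lasso) solution soft-thresholds the unregularized coefficients, while the L2 (ridge) solution shrinks them all uniformly. The coefficient vector below is hypothetical:

```python
import numpy as np

w_ols = np.array([3.0, 0.2, -1.5, 0.05])  # hypothetical unregularized fit
lam = 0.5                                  # regularization strength (illustrative)

# L1: soft-thresholding — entries with |w| <= lam become exactly zero.
w_l1 = np.sign(w_ols) * np.maximum(np.abs(w_ols) - lam, 0.0)
# L2: uniform shrinkage — every entry is scaled down but stays nonzero.
w_l2 = w_ols / (1.0 + lam)

print(w_l1)  # [ 2.5  0.  -1.   0. ]  -> sparse
print(w_l2)  # all four entries shrunk but nonzero
```

This is why L1 is often used for feature selection (zeroed coefficients drop features), while L2 is preferred when all features are believed to carry some signal.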
Question 15. What is the ROC curve, and what does it represent?
The Receiver Operating Characteristic (ROC) curve is a graphical representation of a binary classification model's performance across different thresholds. It plots the true positive rate against the false positive rate, helping to assess the trade-off between sensitivity and specificity.
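A minimal sketch of how the curve is traced: sweep a threshold over the predicted scores, record (FPR, TPR) at each step, and approximate the area under the curve (AUC) with the trapezoidal rule. The labels and scores are made-up example data:

```python
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])                 # ground-truth labels
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.5])  # model scores

P = (y_true == 1).sum()  # number of positives
N = (y_true == 0).sum()  # number of negatives

# Sweep thresholds from high to low; each step may admit one more prediction.
thresholds = np.sort(np.unique(np.concatenate(([np.inf], scores))))[::-1]
tpr, fpr = [], []
for t in thresholds:
    pred = scores >= t
    tpr.append((pred & (y_true == 1)).sum() / P)  # true positive rate (sensitivity)
    fpr.append((pred & (y_true == 0)).sum() / N)  # false positive rate (1 - specificity)

# Trapezoidal area under the (fpr, tpr) curve.
auc = sum((fpr[i + 1] - fpr[i]) * (tpr[i + 1] + tpr[i]) / 2
          for i in range(len(fpr) - 1))
print(auc)  # → 0.875
```

An AUC of 0.5 corresponds to random guessing (the diagonal), while 1.0 is a perfect ranking of positives above negatives.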