# C1_W2: Regression with Multiple Input Variables

This week, you'll extend linear regression to handle multiple input features. You'll also learn some methods for improving your model's training and performance, such as _vectorization_, _feature scaling_, _feature engineering_ and _polynomial regression_. At the end of the week, you'll get to practice implementing linear regression in code.

## C1_W2_M1 Multiple Linear Regression

### C1_W2_M1_1 Multiple features

![](/img/1.2.1.1.multiple.features.png)
- $\vec{x}^{(i)}$ = __vector__ of the 4 features in the $i^{th}$ row
- this is __multiple linear regression__
- __Not__ _multivariate regression_

#### Quiz

In the training set below (see slide: C1_W2_M1_1 Multiple features), what is $x_{1}^{(4)}$?

<details><summary>Ans</summary>852</details>

### C1_W2_M1_2 Vectorization part 1

Learning to write __vectorized code__ lets you take advantage of modern numerical linear algebra libraries, and possibly GPU hardware.
- Vectorization has 2 benefits: _concise and efficient_
- `np.dot` can use parallel hardware (see the sketch below)
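
As a minimal sketch of these two benefits (the weights, features, and values below are assumed for illustration), the loop and `np.dot` versions compute the same $f = \vec{w} \cdot \vec{x} + b$, but the vectorized call is shorter and can run on parallel hardware:

```python
import numpy as np

w = np.array([1.0, 2.5, -3.3])    # assumed example weights
x = np.array([10.0, 20.0, 30.0])  # assumed example features
b = 4.0

# Without vectorization: loop over all n features
f_loop = 0.0
for j in range(w.shape[0]):
    f_loop += w[j] * x[j]
f_loop += b

# Vectorized: one np.dot call computes the whole sum
f_vec = np.dot(w, x) + b

print(f_loop, f_vec)  # both print -35.0
```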

### C1_W2_M1_3 Vectorization part 2

How does the vectorized algorithm work?

Expand All @@ -58,14 +55,14 @@ How does vectorized algorithm works...

![](/img/i1.2.1.3.gradient.descent.png)
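
A small sketch of the update shown in the slide above, assuming a weight vector `w` and a derivative vector `d` of length 16:

```python
import numpy as np

alpha = 0.1
w = np.random.rand(16)  # assumed: 16 weights
d = np.random.rand(16)  # assumed: 16 partial derivatives of J

# Loop version, one parameter at a time:
# for j in range(16):
#     w[j] = w[j] - alpha * d[j]

# Vectorized version: all 16 updates in one array operation
w = w - alpha * d
```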

### C1_W2_Lab01: Python Numpy Vectorization

- [Coursera](https://www.coursera.org/learn/machine-learning/ungradedLab/zadmO/optional-lab-python-numpy-and-vectorization/lab#?path=%2Fnotebooks%2FC1_W2_Lab01_Python_Numpy_Vectorization_Soln.ipynb)
- [Local](/code/C1_W2_Lab01_Python_Numpy_Vectorization_Soln.ipynb)
- $a \cdot b$ returns a scalar
- e.g. $[1, 2, 3, 4] \cdot [-1, 4, 3, 2] = 24$ (checked in the sketch below)
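
A quick check of that example with `np.dot` (array names are mine):

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([-1, 4, 3, 2])
print(np.dot(a, b))  # 24, a scalar
```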

### C1_W2_M1_4 Gradient descent for multiple linear regression

![](/img/1.2.1.4.gradient.descent.png)

Expand All @@ -76,12 +73,12 @@ How does vectorized algorithm works...
![](/img/1.2.1.4.normal.equation.png)
- __Normal Equation__: a closed-form alternative to gradient descent that works only for linear regression
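
For the gradient-descent route, a minimal vectorized sketch of one update step (the function name, variable names, and shapes below are my own assumptions, not the course's code):

```python
import numpy as np

def gradient_descent_step(X, y, w, b, alpha):
    # X: (m, n) training examples, y: (m,) targets,
    # w: (n,) weights, b: scalar bias, alpha: learning rate
    m = X.shape[0]
    err = X @ w + b - y              # f_wb(x^(i)) - y^(i) for every example
    w = w - alpha * (X.T @ err) / m  # simultaneous update of every w_j
    b = b - alpha * err.sum() / m
    return w, b
```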

### C1_W2_Lab02: Multiple linear regression

- [Optional Lab: Multiple linear regression | Coursera](https://www.coursera.org/learn/machine-learning/ungradedLab/7GEJh/optional-lab-multiple-linear-regression/lab)
- [Local](/code/C1_W2_Lab02_Multiple_Variable_Soln.ipynb)

## Quiz: Multiple linear regression

1. In the training set below, what is $x_4^{(3)}$?


<details><summary>Ans</summary>30, 4, F</details>

## C1_W2_M2 Gradient Descent in Practice

### C1_W2_M2_01 Feature scaling part 1

![](/img/1.2.2.01.values.png)
- Use __Feature Scaling__ to enable gradient descent to run faster

:bulb: We can __speed up gradient descent by scaling our features__

### C1_W2_M2_02 Feature scaling part 2

![](/img/1.2.2.02.scale.png)
- scale by dividing by the feature's maximum: $x_i^{(j)} / \max\limits_j x_i^{(j)}$
- the range is OK if the features are relatively close in scale
- rescale if a feature's range is too large or too small (see the sketch below)
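
A small sketch of rescaling a feature column (sample values assumed; mean normalization and z-score normalization are the other rescalings this lesson covers):

```python
import numpy as np

# Assumed example feature column, e.g. house sizes in sq. ft.
x = np.array([2104.0, 1416.0, 852.0, 1534.0])

x_max = x / x.max()                            # divide by max -> values in (0, 1]
x_mean = (x - x.mean()) / (x.max() - x.min())  # mean normalization -> centered at 0
x_z = (x - x.mean()) / x.std()                 # z-score -> mean 0, std 1
```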

#### Quiz

Which of the following is a valid step used during feature scaling? (see bedrooms vs size scatterplot)
- [ ] Multiply each value by the maximum value for that feature
- [ ] Divide each value by the maximum value for that feature

<details><summary>Ans</summary>2</details>

### C1_W2_M2_03 Checking gradient descent for convergence

![](/img/1.2.2.03.alpha.png)
- We can choose $\alpha$

- Want to minimize _cost function_ $\min\limits_{\vec{w}, b} J(\vec{w}, b)$
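
A sketch of the automatic convergence test from this lesson (the function name and $\epsilon$ value are assumptions): declare convergence once $J(\vec{w}, b)$ decreases by less than a small $\epsilon$ between iterations.

```python
def has_converged(cost_history, epsilon=1e-3):
    # Converged when the last iteration decreased J by less than epsilon
    if len(cost_history) < 2:
        return False
    return cost_history[-2] - cost_history[-1] < epsilon
```

The lecture also suggests plotting $J$ against the iteration count (a learning curve) and watching it level off.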

### C1_W2_M2_04 Choosing the learning rate
### C1_W2_M2_05 Optional Lab: Feature scaling and learning rate
### C1_W2_M2_06 Feature engineering
### C1_W2_M2_07 Polynomial regression
### C1_W2_M2_08 Optional lab: Feature engineering and Polynomial regression
### C1_W2_M2_09 Optional lab: Linear regression with scikit-learn
### C1_W2_M2_10 Practice quiz: Gradient descent in practice
### C1_W2_M2_11 Week 2 practice lab: Linear regression
![](Screenshot%202024-10-09%20180220.png)
![](Screenshot%202024-10-09%20180317%201.png)
