
Machine Learning Yearning

by YJHTPII 2020. 10. 21.

Draft

Machine Learning Yearning

Technical Strategy for AI Engineers,

In the Era of Deep Learning

 

Andrew Ng


deeplearning.ai

 

Machine Learning Yearning is a deeplearning.ai project.


Table of Contents (Draft)

1. Why Machine Learning Strategy

 Your learning algorithm's accuracy is not yet good enough.

 Your team has a lot of ideas, such as:

- Get more data

- Collect a more diverse training set

- Train the algorithm longer

- Try a bigger neural network

- Try a smaller neural network

- Try adding regularization (such as L2 regularization)

- Change the neural network architecture

 This book will tell you how.

2. How to use this book to help your team

 Your team will have a deep understanding of how to set technical direction for a machine learning project.

3. Prerequisites and Notation

 I will frequently refer to neural networks. You'll only need a basic understanding of what NNs are to follow this text.

 For concepts mentioned here, watch the videos in the Machine Learning course on Coursera at http://ml-class.org

4. Scale drives machine learning progress

 Two of the biggest drivers of recent progress have been:

 - Data Availability: Digital activities generate huge amounts of data that we can feed to our learning algorithms.

 - Computational scale: we can now train neural networks that are big enough to take advantage of these huge datasets.

 Even as you accumulate more data, the learning curve of an older algorithm (such as logistic regression) "flattens out", and the algorithm stops improving even as you give it more data.

 One of the most reliable ways to improve an algorithm's performance today is to train a bigger network and get more data.

5. Your development and test sets

- Training set: which you run your learning algorithm on.

- Dev (development) set: which you use to tune parameters, select features, and make other decisions regarding the learning algorithm. Sometimes also called the hold-out cross validation set.

- Test set: which you use to evaluate the performance of the algorithm, but not to make any decisions regarding what learning algorithm or parameters to use.

The purpose of the dev and test sets is to direct your team toward the most important changes to make to the machine learning system.

Try to pick test examples that reflect what you ultimately want to perform well on, rather than whatever data you happen to have for training.
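
A minimal sketch of setting up such a split, assuming you have a pool of examples drawn from the distribution you ultimately care about (the variable names and sizes here are illustrative, not from the book):

import random

def make_dev_test_sets(target_pool, dev_size=1000, test_size=1000, seed=0):
    # target_pool: examples from the distribution you ultimately want to
    # perform well on (e.g., real user data), not just convenient training data.
    rng = random.Random(seed)
    pool = list(target_pool)
    rng.shuffle(pool)  # shuffle first so dev and test come from the same distribution
    dev = pool[:dev_size]
    test = pool[dev_size:dev_size + test_size]
    return dev, test

Shuffling before slicing also anticipates the next section: dev and test end up drawn from the same distribution.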

6. Your dev and test sets should come from the same distribution

It is an important research problem to develop learning algorithms that are trained on one distribution and generalize well to another. But if your goal is to make progress on a specific machine learning application rather than make research progress, I recommend trying to choose dev and test sets that are drawn from the same distribution. This will make your team more efficient.

7. How large do the dev/test sets need to be?

Dev sets with sizes from 1,000 to 10,000 examples are common. With 10,000 examples, you will have a good chance of detecting an improvement of 0.1%. There is no need to have excessively large dev/test sets beyond what is needed to evaluate the performance of your algorithms.
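
A rough sketch of why 10,000 examples can surface a 0.1% change: that change corresponds to only 10 examples, so it helps to score both classifiers on the same dev set and count where they disagree. The paired comparison below is my illustration, not from the book:

def accuracy_gain(correct_a, correct_b):
    # correct_a, correct_b: one boolean per dev example, saying whether
    # each model classified that example correctly.
    wins_b = sum(b and not a for a, b in zip(correct_a, correct_b))
    wins_a = sum(a and not b for a, b in zip(correct_a, correct_b))
    return (wins_b - wins_a) / len(correct_a)  # 10 net wins / 10,000 examples = 0.1%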

8. Establish a single-number evaluation metric for your team to optimize

Classification accuracy is an example of a single-number evaluation metric. In contrast, Precision and Recall together are not a single-number evaluation metric: they give two numbers for assessing your classifier. The F1 score is the "harmonic mean" of Precision and Recall, calculated as 2/((1/Precision) + (1/Recall)).
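
A quick sketch of that formula in code; the counts in the usage line are made-up numbers for illustration:

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)  # of the flagged positives, how many were right
    recall = tp / (tp + fn)     # of the true positives, how many were found
    f1 = 2 / ((1 / precision) + (1 / recall))  # harmonic mean, as above
    return precision, recall, f1

# e.g. 90 true positives, 5 false positives, 10 false negatives:
# precision ~94.7%, recall 90%, F1 ~92.3%
print(precision_recall_f1(tp=90, fp=5, fn=10))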

9. Optimizing and satisficing metrics

Suppose you are building a hardware device that uses a microphone to listen for the user saying a particular "wakeword", which then causes the system to wake up. You care about both the false positive rate (the frequency with which the system wakes up even when no one said the wakeword) and the false negative rate (how often it fails to wake up when someone says the wakeword). One reasonable goal for the performance of this system is to minimize the false negative rate (the optimizing metric), subject to there being no more than one false positive every 24 hours of operation (the satisficing metric). FPR: no one says the wakeword -> the system wakes up. FNR: someone says the wakeword -> the system fails to wake up.
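
A minimal sketch of picking a model this way; the model names and rates are hypothetical, while the 1-per-24-hours threshold is from the text:

MAX_FALSE_POSITIVES_PER_24H = 1  # satisficing metric: a hard constraint

def pick_wakeword_model(candidates):
    # candidates: (name, false_negative_rate, false_positives_per_24h) tuples.
    # Keep models that meet the satisficing constraint, then minimize FNR.
    acceptable = [c for c in candidates if c[2] <= MAX_FALSE_POSITIVES_PER_24H]
    return min(acceptable, key=lambda c: c[1])

models = [("a", 0.03, 0.5), ("b", 0.01, 2.0), ("c", 0.02, 0.9)]
print(pick_wakeword_model(models))  # ('c', ...): "b" has the best FNR but wakes up too often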

10. Having a dev set and metric speeds up iterations

Having a dev set and metric allows you to very quickly detect which ideas are successfully giving you small (or large) improvements, and therefore lets you quickly decide what ideas to keep refining, and which ones to discard.
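
A toy sketch of that loop, assuming each idea is a callable that trains and returns a model, and the metric is a single number where higher is better:

def run_experiments(train_fns, dev_set, metric):
    # Score every idea on the same dev set with the same single-number metric,
    # keeping whichever model does best so far.
    best_score, best_model = float("-inf"), None
    for train in train_fns:
        model = train()
        score = metric(model, dev_set)
        if score > best_score:
            best_score, best_model = score, model
    return best_model, best_score

# Toy usage: "models" are thresholds on a score; the metric is dev accuracy.
dev = [(0.2, 0), (0.7, 1), (0.9, 1), (0.4, 0)]
accuracy = lambda t, data: sum((x >= t) == bool(y) for x, y in data) / len(data)
print(run_experiments([lambda: 0.5, lambda: 0.8], dev, accuracy))  # threshold 0.5 wins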

11. When to change dev/test sets and metrics

There are three main possible causes of the dev set/metric incorrectly rating one classifier over another:

- The actual distribution you need to do well on is different from the dev/test sets.

- You have overfit to the dev set.

- The metric is measuring something other than what the project needs to optimize.

12. Takeaways: Setting up development and test sets

If your dev set and metric are no longer pointing your team in the right direction, quickly change them.

13. Build your first system quickly, then iterate

Build and train a basic system quickly, perhaps in just a few days.

14. Error analysis: Look at dev set examples to evaluate ideas

- Gather a sample of 100 dev set examples that your system misclassified, i.e., examples that your system made an error on.

- Look at these examples manually, and count what fraction of them are dog images. The process of looking at misclassified examples is called error analysis.
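
A minimal sketch of the counting step, where is_dog stands in for your own manual judgment of each example (everything here is illustrative):

import random

def category_fraction(misclassified, is_dog, n=100):
    # Sample ~100 misclassified dev examples and count how many are dog images.
    batch = random.sample(misclassified, min(n, len(misclassified)))
    return sum(1 for ex in batch if is_dog(ex)) / len(batch)

If only 5% of the errors turn out to be dog images, then even completely solving the dog problem fixes at most 5% of your current errors.
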
15. Evaluating multiple ideas in parallel during error analysis

- Error analysis is an iterative process. Don't worry if you start off with no categories in mind. After looking at a couple of images, you might come up with a few ideas for error categories.
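
A spreadsheet-style tally in code; the categories are illustrative (borrowed from the book's cat-classifier example), and each example can carry several tags:

from collections import Counter

error_tags = [  # one set of category tags per misclassified dev example
    {"dog"}, {"great cat"}, {"blurry", "great cat"}, {"dog"}, {"blurry"},
]
counts = Counter(tag for tags in error_tags for tag in tags)
for category, n in counts.most_common():
    print(f"{category}: {100 * n / len(error_tags):.0f}% of errors")
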
16. Cleaning up mislabeled dev and test set examples

- If you decide to improve the label quality, consider double-checking both the labels of examples that your system misclassified and the labels of examples it classified correctly. It is possible that both the original label and your learning algorithm were wrong on an example.
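
One way to act on this, sketched under my own assumptions about the bookkeeping: after a second labeling pass over a sample of dev examples, estimate what share of your measured errors is really label noise:

def label_noise_share(rechecked):
    # rechecked: (was_misclassified, label_was_wrong) pairs from a second
    # labeling pass over sampled dev examples (hypothetical record format).
    errors = [r for r in rechecked if r[0]]
    if not errors:
        return 0.0
    return sum(1 for r in errors if r[1]) / len(errors)

If that share is large, cleaning labels is worth the effort; if it is tiny, your time is better spent on other error categories.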

 

 
