In lesson 5 we put all the pieces of training together to understand exactly what is going on when we talk about *backpropagation*. We'll use this knowledge to create and train a simple neural network from scratch.
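To preview what "from scratch" means here, a toy training loop with the backward pass written out by hand via the chain rule might look like the following. This is an illustrative numpy sketch, not the lesson's notebook code; the data, layer sizes, and learning rate are all made up:

```python
import numpy as np

# Minimal from-scratch training loop: forward pass, hand-written
# backward pass (backpropagation), and a gradient-descent update.
rng = np.random.default_rng(0)

# Toy regression data: y = 3x + noise
x = rng.normal(size=(100, 1))
y = 3 * x + 0.1 * rng.normal(size=(100, 1))

# Parameters of a 1 -> 8 -> 1 network with a ReLU in the middle
w1 = 0.5 * rng.normal(size=(1, 8))
b1 = np.zeros(8)
w2 = 0.5 * rng.normal(size=(8, 1))
b2 = np.zeros(1)

lr = 0.1
for _ in range(300):
    # Forward pass
    h = x @ w1 + b1          # first linear layer
    a = np.maximum(h, 0)     # ReLU
    pred = a @ w2 + b2       # second linear layer
    loss = ((pred - y) ** 2).mean()

    # Backward pass: chain rule applied layer by layer, in reverse
    dpred = 2 * (pred - y) / len(x)   # d(loss)/d(pred)
    dw2 = a.T @ dpred
    db2 = dpred.sum(0)
    da = dpred @ w2.T
    dh = da * (h > 0)                 # ReLU passes gradient only where h > 0
    dw1 = x.T @ dh
    db1 = dh.sum(0)

    # Gradient-descent update
    w1 -= lr * dw1; b1 -= lr * db1
    w2 -= lr * dw2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```

Frameworks like PyTorch automate exactly the middle block (the backward pass) via autograd; writing it once by hand is what makes the automation understandable.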

We'll also see how we can look inside the weights of an embedding layer, to find out what our model has learned about our categorical variables. This will let us get some insights into which movies we should probably avoid at all costs…

Although embeddings are most widely known in the context of word embeddings for NLP, they are at least as important for categorical variables in general, such as for tabular data or collaborative filtering. They can even be used with non-neural models with great success.
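To make this concrete, here is a hedged numpy sketch of the core idea: an embedding layer is just a lookup of rows in a trainable matrix, indexed by category id, and in collaborative filtering a rating can be predicted from the dot product of a user row and a movie row. All names and sizes below are illustrative, not from the lesson's spreadsheet or notebook:

```python
import numpy as np

# Tiny fake collaborative-filtering setup: each user and each movie
# gets a learnable vector of n_factors numbers (its embedding).
rng = np.random.default_rng(42)
n_users, n_movies, n_factors = 5, 7, 4

user_emb = rng.normal(scale=0.1, size=(n_users, n_factors))
movie_emb = rng.normal(scale=0.1, size=(n_movies, n_factors))

def predict(user_ids, movie_ids):
    # The "embedding layer" is row lookup by integer index
    u = user_emb[user_ids]
    m = movie_emb[movie_ids]
    return (u * m).sum(axis=1)  # one dot product per (user, movie) pair

scores = predict(np.array([0, 1, 2]), np.array([3, 3, 6]))

# "Looking inside" the weights: the rows can be compared directly,
# e.g. find the movie whose vector is most similar to movie 0's.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sims = [cosine(movie_emb[0], movie_emb[j]) for j in range(n_movies)]
most_similar = int(np.argsort(sims)[-2])  # [-1] is movie 0 itself
```

After training, this kind of row-level inspection is what reveals structure in the categorical variable, such as clusters of similar movies.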

- Lesson notes - thanks to @PoonamV
- Detailed lesson notes - thanks to @hiromi
- Notebooks:
- Excel spreadsheets:
  - collab_filter.xlsx
