Multi-task learning

In multi-task learning, the more source environments the agent is trained on, the greater the diversity of its experience and the better its performance on the target environment. The multiple source tasks can be learned either by a single agent or by several agents. If a single agent has been trained, deploying it on the target task is straightforward. If instead several agents have learned separate tasks, the resulting policies can either be used as an ensemble, averaging their predictions on the target task, or merged into a single policy through an intermediate step called distillation. Distillation compresses the knowledge of an ensemble of models into one model that is easier to deploy and faster at inference.
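The ensemble-then-distill step can be sketched as follows. This is a minimal tabular illustration, not a full deep-RL pipeline: the teacher policies, state and action counts, and the plain gradient-descent loop are all hypothetical stand-ins. The student is a softmax over a logits table, fitted to the averaged teacher predictions by minimizing the KL divergence from the target.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3  # hypothetical sizes for illustration

# Hypothetical teacher policies: one action distribution per state,
# standing in for agents trained on different source tasks.
teachers = [rng.dirichlet(np.ones(n_actions), size=n_states) for _ in range(2)]

# Ensemble step: average the teachers' predictions per state.
target = np.mean(teachers, axis=0)  # shape (n_states, n_actions)

# Distillation step: fit a single student policy (tabular softmax
# logits) to the averaged target by minimizing KL(target || student).
logits = np.zeros((n_states, n_actions))
lr = 0.5
for _ in range(500):
    z = logits - logits.max(axis=1, keepdims=True)  # stable softmax
    student = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # For a softmax, the gradient of KL w.r.t. the logits is
    # simply (student - target).
    logits -= lr * (student - target)

kl = np.sum(target * (np.log(target) - np.log(student)))
print(f"final KL(target || student) = {kl:.6f}")
```

Because a tabular softmax can represent any target distribution exactly, the KL divergence here drops close to zero; in practice the student is a neural network trained on states sampled from the source tasks, and the fit is approximate.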
