Challenging the memory of RL agents

Reinforcement learning agents are usually trained to maximize their rewards by taking actions in an environment modeled as a Markov Decision Process (MDP). A Markov Decision Process is simply a model that describes an environment in terms of its states, actions, and rewards, together with the possible future states each action can lead to. The key point is that agents know information from the present and can approximately predict …
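
As a rough illustration of that agent-environment loop, here is a minimal sketch assuming the gymnasium API and the CartPole-v1 environment (both illustrative choices, not part of the post): the agent observes the current state, takes an action, and receives a reward and the next state.

```python
# Minimal agent-environment loop over an MDP (sketch; gymnasium and
# CartPole-v1 are illustrative assumptions, not the post's setup).
import gymnasium as gym

env = gym.make("CartPole-v1")
state, _ = env.reset(seed=0)

total_reward = 0.0
done = False
while not done:
    # A real agent would choose the action from the current state;
    # here we sample randomly just to show the MDP interaction.
    action = env.action_space.sample()
    next_state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    state = next_state
    done = terminated or truncated

print(f"Episode return: {total_reward}")
```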

Exploring Transformer Model for Reinforcement Learning

MLPs are widely used in RL to implement a learnable agent that is trained in a given environment according to a specific algorithm. Recent work in NLP has shown that the Transformer can replace and outperform the MLP in most tasks, which has led to its adoption in areas outside NLP such as computer vision. In RL, however, the Transformer architecture is still not widely adopted, and agents …
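
For a concrete sense of what swapping an MLP policy for a Transformer could look like, here is a hedged PyTorch sketch; the layer sizes, the use of nn.TransformerEncoder over a short observation history, and the TransformerPolicy name are assumptions for illustration, not the architecture explored in the post.

```python
# Sketch of a Transformer-based policy over a short history of observations
# (PyTorch; dimensions and structure are illustrative assumptions).
import torch
import torch.nn as nn

class TransformerPolicy(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)           # project observations
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_actions)          # action logits

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, seq_len, obs_dim) -- a short history of observations
        h = self.encoder(self.embed(obs_seq))
        return self.head(h[:, -1])                          # logits from the last step

policy = TransformerPolicy(obs_dim=4, n_actions=2)
logits = policy(torch.randn(1, 8, 4))                       # one 8-step history
```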

Speed benchmark einsum vs matmul in XL-Attention

The original Transformer can only attend to a fixed, limited segment of the input when computing attention. The major drawback of this architecture is that no information can flow across separate segments, which prevents the Transformer from modeling long-term dependencies. Transformer-XL is an enhancement of the vanilla Transformer that lets it store the most recent hidden states in a fixed-length memory …
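
A rough sense of the benchmark in the title can be had by timing the query-key score computation with torch.einsum versus torch.matmul; the tensor shapes and the plain (non-XL) score product below are assumptions, not the post's exact XL-attention setup.

```python
# Rough timing of einsum vs matmul for attention scores (sketch; shapes and
# the simple QK^T computation are illustrative assumptions).
import time
import torch

B, H, L, D = 8, 8, 512, 64                 # batch, heads, sequence length, head dim
q = torch.randn(B, H, L, D)
k = torch.randn(B, H, L, D)

def bench(fn, iters=50):
    fn()                                   # warm-up
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

t_einsum = bench(lambda: torch.einsum("bhid,bhjd->bhij", q, k))
t_matmul = bench(lambda: torch.matmul(q, k.transpose(-2, -1)))
print(f"einsum: {t_einsum * 1e3:.2f} ms   matmul: {t_matmul * 1e3:.2f} ms")
```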

How Genify used a Transformer-based model to build a recommender system that outperforms industry benchmarks

The rapid rise of AI, and more recently of deep learning, has brought about a succession of breakthroughs in computer science. These have had a profound impact on both the academic and the business world. In particular, modern deep learning techniques applied to the long-standing concept of recommender systems have given rise to a new, superior class of neural recommender systems, which are …

Genify’s experience testing Amazon Personalize: learnings and limitations

Challenges of machine learning: Machine learning is a complex field that borrows elements from different areas such as computer science, algebra, and statistics. Hence, it is not straightforward, even for experts in the field, to build strong machine learning models for a predefined task. Furthermore, those models must also be tuned with a time-consuming and repetitive hyper-parameter search in order to find the best set …
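
As an illustration of the kind of repetitive hyper-parameter search the post refers to, here is a minimal sketch using scikit-learn's GridSearchCV on a toy dataset; the estimator, dataset, and parameter grid are assumptions for illustration, not Genify's actual setup.

```python
# Minimal hyper-parameter grid search (sketch; estimator, data, and grid
# are illustrative assumptions).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print("best parameters:", search.best_params_)
```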