# Guidance

QlibRL helps users get started quickly and conveniently implement quantitative strategies based on reinforcement learning (RL) algorithms. For different user groups, we recommend the following guidance for using QlibRL.

## Beginners to Reinforcement Learning Algorithms

Whether you are a quantitative researcher who wants to understand what RL can do in trading, or a learner who wants to get started with RL algorithms in trading scenarios, if you have limited knowledge of RL and want the detailed settings abstracted away so you can get started quickly, we recommend the following sequence for learning QlibRL:
• Learn the fundamentals of RL in part1.
• Understand the trading scenarios where RL methods can be applied in part2.
• Run the examples in part3 to solve trading problems using RL.
• If you want to explore QlibRL further and customize it, first understand the framework of QlibRL in part4, then rewrite specific components according to your needs.

## Reinforcement Learning Algorithm Researcher

If you are already familiar with existing RL algorithms and dedicated to RL algorithm research, but lack domain knowledge in finance and want to validate the effectiveness of your algorithms in financial trading scenarios, we recommend the following steps to get started with QlibRL:
• Understand the trading scenarios where RL methods can be applied in part2.
• Choose an RL application scenario (currently, QlibRL provides two scenario examples: order execution and algorithmic trading) and run the example in part3 to get it working.
• Modify the policy part to incorporate your own RL algorithm.
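Conceptually, swapping in your own algorithm means providing a policy that exposes the same interface the training loop already uses. The sketch below is illustrative only: it is not QlibRL's or tianshou's actual API (QlibRL's real policies subclass tianshou's policy classes), and the class name and method signatures here are assumptions made for the example.

```python
import random
from typing import Dict, List


class RandomPolicy:
    """Hypothetical stand-in for a custom RL policy.

    Illustrative only: `forward` maps observations to actions and
    `learn` updates the policy from a batch of experience. A real
    algorithm would hold a model and an optimizer.
    """

    def __init__(self, n_actions: int) -> None:
        self.n_actions = n_actions
        self.update_count = 0

    def forward(self, observations: List[float]) -> List[int]:
        # Choose one discrete action per observation (random baseline).
        return [random.randrange(self.n_actions) for _ in observations]

    def learn(self, batch: Dict[str, list]) -> Dict[str, float]:
        # A real algorithm would compute a loss and step an optimizer here.
        self.update_count += 1
        return {"loss": 0.0}


# The surrounding training loop stays unchanged; only the policy differs.
policy = RandomPolicy(n_actions=3)
actions = policy.forward([0.1, 0.2, 0.5])
stats = policy.learn({"obs": [0.1], "act": [1], "rew": [0.0]})
```

Keeping the interface fixed is what lets you validate a new algorithm against the existing scenarios without touching the simulator or interpreters.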

## Quantitative Researcher

If you have a certain level of financial domain knowledge and coding skills, and you want to explore the application of RL algorithms in the investment field, we recommend the following steps to explore QlibRL:
• Learn the fundamentals of RL in part1.
• Understand the trading scenarios where RL methods can be applied in part2.
• Run the examples in part3 to solve trading problems using RL.
• Understand the framework of QlibRL in part4.
• Choose a suitable RL algorithm based on the characteristics of the problem you want to solve (currently, QlibRL supports PPO and DQN algorithms based on tianshou).
• Design the MDP (Markov Decision Process) based on market trading rules and the problem you want to solve. Refer to the order execution example and modify the following modules accordingly: State, Metrics, ActionInterpreter, StateInterpreter, Reward, Observation, Simulator.
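To make the module split above concrete, here is a toy single-asset order-execution MDP. All class names and signatures are illustrative assumptions for this sketch, not QlibRL's actual base classes; consult the order execution example for the real interfaces.

```python
from dataclasses import dataclass


# Hypothetical order-execution MDP mirroring the module split above.
@dataclass
class State:
    remaining: float   # volume still to execute
    elapsed: int       # steps used so far
    horizon: int       # total steps allowed


class StateInterpreter:
    """Turn the simulator state into the observation the policy sees."""

    def interpret(self, state: State) -> list:
        return [state.remaining, state.elapsed / state.horizon]


class ActionInterpreter:
    """Turn the policy's discrete action into a trade volume."""

    def interpret(self, state: State, action: int) -> float:
        # action in {0, 1, 2} -> trade 0%, 50%, or 100% of what remains
        return state.remaining * (action / 2)


class Reward:
    """Score one step; here, penalize volume left unexecuted at the horizon."""

    def reward(self, state: State) -> float:
        return -state.remaining if state.elapsed >= state.horizon else 0.0


class Simulator:
    """Advance the environment state given an executed volume."""

    def __init__(self, volume: float, horizon: int) -> None:
        self.state = State(remaining=volume, elapsed=0, horizon=horizon)

    def step(self, exec_volume: float) -> State:
        self.state.remaining -= min(exec_volume, self.state.remaining)
        self.state.elapsed += 1
        return self.state


# One rollout step: observe, act, simulate, reward.
sim = Simulator(volume=100.0, horizon=4)
obs = StateInterpreter().interpret(sim.state)
vol = ActionInterpreter().interpret(sim.state, action=1)  # trade half
state = sim.step(vol)
r = Reward().reward(state)
```

The point of the split is separation of concerns: the Simulator encodes market rules, the interpreters translate between market state and the policy's observation/action spaces, and the Reward encodes the objective, so each can be modified independently.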