HUANG1993 committed on
Commit c823141
1 Parent(s): 5d07f3b

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -13,13 +13,11 @@ tags:
 
 # 🤠GreedRL
 
-## Introduction
+## Overview
 
-***Combinatorial Optimization Problems (COPs)*** have long been an active field of research. Generally speaking, there exist two main approaches for solving COPs, each with its own pros and cons. On the one hand, *exact algorithms* can find the optimal solution, but they may be prohibitive for large instances because of the exponential growth of execution time.
-On the other hand, *heuristic algorithms* can compute solutions efficiently, but cannot guarantee the optimality of those solutions.
+- 🤠GreedRL is a fast and general framework for **Combinatorial Optimization Problems (COPs)**, based on **Deep Reinforcement Learning (DRL)**.
 
-In realistic business scenarios, COPs are usually large-scale (>=1000 nodes), with very strict requirements on execution time and solution quality. To better solve these problems, we
-propose a generic and complete solver, named **🤠GreedRL**, based on **Deep Reinforcement Learning (DRL)**, which achieves better speed and solution quality than *heuristic algorithms*.
+- 🤠GreedRL is **1200 times faster** and achieves **3% better solution quality** than [Google OR-Tools](https://developers.google.com/optimization) on large-scale (>=1000 nodes) CVRPs.
 
 ## 🏆Award
 
@@ -30,15 +28,17 @@ tags:
 
 * **GENERAL**
 
-🤠GreedRL provides **a high-level abstraction for COPs** and can solve various types of problems, such as Vehicle Routing Problems (VRPs), Batching, Scheduling and Online Assignment problems. For VRPs, it also supports variants with different constraints, such as Time-Window, Pickup-Delivery, Split-Delivery, Multi-Vehicle, etc.
+🤠GreedRL provides **a high-level abstraction for COPs** and can solve various types of problems, such as TSP, CVRP, VRPTW, PDPTW, SDVRP, DPDP, Order Batching, etc.
 
 * **HIGH-PERFORMANCE**
 
-🤠GreedRL speeds up DRL environment (Env) simulation with **CUDA and C++ implementations**. It also implements custom **Operators**, such as *Masked Matrix Multiplication* and *Masked Additive Attention*, to replace the native PyTorch operators and achieve the best computing performance.
+🤠GreedRL speeds up DRL environment (Env) simulation with **CUDA and C++ implementations**.
 
 * **USER-FRIENDLY**
 
-🤠GreedRL **wraps commonly used modules**, such as Neural Network (NN) components, RL training algorithms and COPs constraint implementations, which makes it easy to use.
+The 🤠GreedRL framework provides **user-friendly COPs modeling**: users only need to declare the variables, constraints and objectives of a COP.
+
+For more detailed examples, please refer to [COPs Modeling examples].
 
 ## Architecture
 ![](./images/GREEDRL-Framwork_en.png)
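The exact-vs-heuristic trade-off discussed in the (removed) introduction can be seen on a toy TSP. This is an illustrative sketch only, not GreedRL code: exhaustive search is optimal but enumerates (n-1)! tours, while a greedy nearest-neighbour heuristic runs in O(n^2) time but may return a worse tour.

```python
from itertools import permutations

# Toy symmetric TSP instance: 4 nodes, node 0 is the start/end.
dist = [
    [0, 1, 2, 3],
    [1, 0, 1, 5],
    [2, 1, 0, 10],
    [3, 5, 10, 0],
]
n = len(dist)

def tour_cost(tour):
    """Cost of a closed tour (returns to the start node)."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def exact_tour():
    """Exact: enumerate all (n-1)! tours -- optimal, but exponential time."""
    return min(([0] + list(p) for p in permutations(range(1, n))), key=tour_cost)

def greedy_tour():
    """Heuristic: nearest neighbour -- fast, but no optimality guarantee."""
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[tour[-1]][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(tour_cost(exact_tour()), tour_cost(greedy_tour()))  # 11 15
```

On this instance the greedy tour costs 15 against the optimum of 11, which is exactly the gap that motivates learning-based solvers on large instances where exhaustive search is infeasible.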
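The "declare variables, constraints and objectives" modeling style mentioned under USER-FRIENDLY can be sketched as follows. All names here (`capacity_ok`, `total_distance`, the plain-list route encoding) are hypothetical illustrations and do not reflect GreedRL's actual API.

```python
# Hypothetical sketch of declarative COP modeling (NOT GreedRL's real API):
# a tiny CVRP is described by its variables (routes), constraints (vehicle
# capacity) and objective (total distance), each declared separately.

demands = [0, 4, 6, 3]   # demand per node; node 0 is the depot
capacity = 10            # vehicle capacity

dist = [                 # symmetric distance matrix
    [0, 2, 4, 3],
    [2, 0, 5, 6],
    [4, 5, 0, 7],
    [3, 6, 7, 0],
]

def capacity_ok(route):
    """Constraint: total demand served on one route must fit the vehicle."""
    return sum(demands[i] for i in route) <= capacity

def total_distance(routes):
    """Objective: total travel distance; every route starts/ends at depot 0."""
    cost = 0
    for route in routes:
        path = [0] + route + [0]
        cost += sum(dist[a][b] for a, b in zip(path, path[1:]))
    return cost

routes = [[1, 2], [3]]                      # candidate solution (the variables)
assert all(capacity_ok(r) for r in routes)  # demands 4+6=10 and 3 both fit
print(total_distance(routes))               # (2+5+4) + (3+3) = 17
```

Keeping constraints and objective as separate declarations is what lets a solver (DRL-based or otherwise) evaluate candidate solutions without knowing anything problem-specific beyond these functions.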