License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.GIScience.2023.58
URN: urn:nbn:de:0030-drops-189536
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2023/18953/


Ozkan, Mustafa Can; Cheng, Tao

Finding Feasible Routes with Reinforcement Learning Using Macro-Level Traffic Measurements (Short Paper)

pdf-format:
LIPIcs-GIScience-2023-58.pdf (0.7 MB)


Abstract

The quest for identifying feasible routes holds immense significance in the realm of transportation, spanning a diverse range of applications, from logistics and emergency systems to taxis and public transport services. This research area offers multifaceted benefits, including optimising traffic management, maximising traffic flow, and reducing carbon emissions and fuel consumption. Extensive studies have addressed this critical issue, primarily by finding shortest paths, with some incorporating traffic conditions such as waiting times at traffic lights and traffic speeds on road segments. In this study, we direct our attention to historical data sets that encapsulate individuals' route preferences, assuming they encompass all traffic conditions, real-time decisions and topological features. We take the preferences that prevailed during the recorded period as a guide to feasible routes. The study’s noteworthy contribution lies in our departure from analysing individual preferences and trajectory information: instead, we rely solely on macro-level measurements of each road segment, such as traffic flow or traffic speed. Such macro-level measurements are easier to collect than individual data sets. We propose an algorithm based on Q-learning, employing traffic measurements within a road network as positive, attractive rewards for an agent. In short, macro-level observations of collective route choices allow us to determine optimal routes between any two points. Preliminary results demonstrate the agent’s ability to accurately identify the most feasible routes within a short training period.
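
The record does not include an implementation, but the approach described in the abstract can be sketched in a few dozen lines. The following is a minimal, self-contained Python illustration, not the authors' code: tabular Q-learning on a toy road graph in which hypothetical per-edge flow counts act as positive rewards and a large terminal bonus marks the destination, so that the greedy policy follows high-flow (i.e. frequently preferred) road segments. The graph, the flow values, the function name q_learn_route and all parameter settings are illustrative assumptions.

import random

# Toy road network: adjacency list of node -> list of neighbour nodes.
GRAPH = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

# Hypothetical macro-level measurements (e.g. observed vehicle counts)
# per directed edge; higher flow means the edge is more frequently chosen.
FLOW = {
    ("A", "B"): 120, ("A", "C"): 30,
    ("B", "A"): 110, ("B", "D"): 140,
    ("C", "A"): 25,  ("C", "D"): 40,
    ("D", "B"): 130, ("D", "C"): 35, ("D", "E"): 150,
    ("E", "D"): 145,
}

def q_learn_route(origin, dest, episodes=2000, alpha=0.1, gamma=0.9,
                  eps=0.2, goal_bonus=10_000.0, max_steps=50):
    """Learn a route from origin to dest, using edge flow as a positive reward."""
    q = {(n, m): 0.0 for n in GRAPH for m in GRAPH[n]}
    for _ in range(episodes):
        state = origin
        for _ in range(max_steps):
            actions = GRAPH[state]
            # Epsilon-greedy action selection over outgoing edges.
            if random.random() < eps:
                nxt = random.choice(actions)
            else:
                nxt = max(actions, key=lambda m: q[(state, m)])
            # Reward = macro-level flow on the traversed edge,
            # plus a large bonus when the destination is reached.
            reward = FLOW[(state, nxt)] + (goal_bonus if nxt == dest else 0.0)
            future = 0.0 if nxt == dest else max(q[(nxt, m)] for m in GRAPH[nxt])
            # Standard Q-learning update.
            q[(state, nxt)] += alpha * (reward + gamma * future - q[(state, nxt)])
            state = nxt
            if state == dest:
                break
    # Greedy rollout of the learned policy.
    route, state = [origin], origin
    while state != dest and len(route) <= max_steps:
        state = max(GRAPH[state], key=lambda m: q[(state, m)])
        route.append(state)
    return route

if __name__ == "__main__":
    print(q_learn_route("A", "E"))  # e.g. ['A', 'B', 'D', 'E'], following high-flow edges

One design point in this sketch: the destination bonus is chosen large enough to dominate any discounted sum of edge rewards; because every segment yields a positive reward, a small bonus could otherwise encourage the agent to cycle rather than terminate.
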

BibTeX - Entry

@InProceedings{ozkan_et_al:LIPIcs.GIScience.2023.58,
  author =	{Ozkan, Mustafa Can and Cheng, Tao},
  title =	{{Finding Feasible Routes with Reinforcement Learning Using Macro-Level Traffic Measurements}},
  booktitle =	{12th International Conference on Geographic Information Science (GIScience 2023)},
  pages =	{58:1--58:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-288-4},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{277},
  editor =	{Beecham, Roger and Long, Jed A. and Smith, Dianna and Zhao, Qunshan and Wise, Sarah},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/opus/volltexte/2023/18953},
  URN =		{urn:nbn:de:0030-drops-189536},
  doi =		{10.4230/LIPIcs.GIScience.2023.58},
  annote =	{Keywords: routing, reinforcement learning, q-learning, data mining, macro-level patterns}
}

Keywords: routing, reinforcement learning, q-learning, data mining, macro-level patterns
Collection: 12th International Conference on Geographic Information Science (GIScience 2023)
Issue Date: 2023
Date of publication: 07.09.2023

