License: Creative Commons Attribution 3.0 Unported license (CC BY 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/DagMan.3.1.1
URN: urn:nbn:de:0030-drops-43569
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2013/4356/


Joseph, Anthony D. ; Laskov, Pavel ; Roli, Fabio ; Tygar, J. Doug ; Nelson, Blaine
Other contributors (editors, etc.): Anthony D. Joseph and Pavel Laskov and Fabio Roli and J. Doug Tygar and Blaine Nelson

Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)

pdf-format:
dagman-v003-i001-p001-12371.pdf (1.0 MB)


Abstract

The study of learning in adversarial environments is an emerging discipline at the juncture between machine learning and computer security. The interest in learning-based methods for security and system-design applications comes from the high degree of complexity of the phenomena underlying the security and reliability of computer systems. As it becomes increasingly difficult to achieve the desired properties solely with statically designed mechanisms, learning methods are being used more and more to obtain a better understanding of the various data collected from these complex systems. However, learning approaches can be evaded by adversaries, who change their behavior in response to the learning methods. To date, there has been limited research into learning techniques that are resilient to attacks with provable robustness guarantees.
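
To make the evasion problem mentioned above concrete, the following is a minimal, hypothetical sketch (not taken from the manifesto): a toy two-feature "malicious vs. benign" detector is trained with logistic regression, and an adversary nudges a malicious sample along the direction that lowers the detector's score until it is misclassified. All data, feature dimensions, and step sizes are illustrative assumptions.

    # Hypothetical illustration of evasion against a learned detector.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: label 1 = "malicious", label 0 = "benign".
    X = np.vstack([rng.normal(2.0, 0.5, (50, 2)),    # malicious cluster
                   rng.normal(-2.0, 0.5, (50, 2))])  # benign cluster
    y = np.concatenate([np.ones(50), np.zeros(50)])

    # Train a plain logistic-regression detector by gradient descent.
    w, b = np.zeros(2), 0.0
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= 0.5 * (X.T @ (p - y)) / len(y)
        b -= 0.5 * np.mean(p - y)

    # Adversary: move one malicious point along -w (the direction that
    # lowers the detector's score fastest) until it is labeled benign.
    x_adv = X[0].copy()
    step = 0.1 * -w / np.linalg.norm(w)
    while x_adv @ w + b > 0:
        x_adv += step

    print("original score:", X[0] @ w + b)    # > 0: flagged as malicious
    print("evasive score :", x_adv @ w + b)   # <= 0: slips past the detector
    print("perturbation  :", np.linalg.norm(x_adv - X[0]))

The point of the sketch is that a small, targeted perturbation suffices to cross the learned decision boundary, which is exactly the kind of behavior that robustness guarantees for secure learning aim to rule out or bound.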

The Perspectives Workshop "Machine Learning Methods for Computer Security" was convened to bring together interested researchers from both the computer security and machine learning communities to discuss techniques, challenges, and future research directions for secure learning and learning-based security applications. As a result of the twenty-two invited presentations, workgroup sessions, and informal discussions, several priority areas of research were identified. The open problems identified in the field ranged from traditional applications of machine learning in security, such as attack detection and analysis of malicious software, to methodological issues related to secure learning, especially the development of new formal approaches with provable security guarantees. Finally, a number of potential applications were pinpointed outside the traditional scope of computer security in which security issues may also arise in connection with data-driven methods. Examples of such applications are social media spam, plagiarism detection, authorship identification, copyright enforcement, computer vision (particularly in the context of biometrics), and sentiment analysis.

BibTeX - Entry

@Article{joseph_et_al:DM:2013:4356,
  author =	{Anthony D. Joseph and Pavel Laskov and Fabio Roli and J. Doug Tygar and Blaine Nelson},
  title =	{{Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)}},
  pages =	{1--30},
  journal =	{Dagstuhl Manifestos},
  ISSN =	{2193-2433},
  year =	{2013},
  volume =	{3},
  number =	{1},
  editor =	{Anthony D. Joseph and Pavel Laskov and Fabio Roli and  J. Doug Tygar and Blaine Nelson},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{http://drops.dagstuhl.de/opus/volltexte/2013/4356},
  URN =		{urn:nbn:de:0030-drops-43569},
  doi =		{10.4230/DagMan.3.1.1},
  annote =	{Keywords: Adversarial Learning, Computer Security, Robust Statistical Learning, Online Learning with Experts, Game Theory, Learning Theory}
}

Keywords: Adversarial Learning, Computer Security, Robust Statistical Learning, Online Learning with Experts, Game Theory, Learning Theory
Collection: Dagstuhl Manifestos, Volume 3, Issue 1
Issue Date: 2013
Date of publication: 29.11.2013

