License: Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license (CC BY-NC-ND 3.0)
When quoting this document, please refer to the following
DOI: 10.4230/DagRep.2.9.109
URN: urn:nbn:de:0030-drops-37908
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2013/3790/


Joseph, Anthony D. ; Laskov, Pavel ; Roli, Fabio ; Tygar, J. Doug ; Nelson, Blaine
Other contributors (eds., etc.): Anthony D. Joseph and Pavel Laskov and Fabio Roli and J. Doug Tygar and Blaine Nelson

Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)

PDF: dagrep_v002_i009_p109_s12371.pdf (0.7 MB)


Abstract

The study of learning in adversarial environments is an emerging discipline at the juncture between machine learning and computer security that raises new questions within both fields.

The interest in learning-based methods for security and system design applications comes from the high degree of complexity of phenomena underlying the security and reliability of computer systems. As it becomes increasingly difficult to reach the desired properties by design alone, learning methods are being used to obtain a better understanding of various data collected from these complex systems.

However, learning approaches can be co-opted or evaded by adversaries, who adapt their behavior to counter them. To date, there has been limited research into learning techniques that are resilient to attacks and offer provable robustness guarantees, making the design of secure learning-based systems a lucrative open research area with many challenges.

The Perspectives Workshop, "Machine Learning Methods for Computer Security," was convened to bring together interested researchers from both the computer security and machine learning communities to discuss techniques, challenges, and future research directions for secure learning and learning-based security applications.

This workshop featured twenty-two invited talks from leading researchers within the secure learning community covering topics in adversarial learning, game-theoretic learning, collective classification, privacy-preserving learning, security evaluation metrics, digital forensics, authorship identification, adversarial advertisement detection, learning for offensive security, and data sanitization. The workshop also featured workgroup sessions organized into three topics: machine learning for computer security, secure learning, and future applications of secure learning.

BibTeX - Entry

@Article{joseph_et_al:DR:2013:3790,
  author =	{Anthony D. Joseph and Pavel Laskov and Fabio Roli and J. Doug Tygar and Blaine Nelson},
  title =	{{Machine Learning Methods for Computer Security (Dagstuhl Perspectives Workshop 12371)}},
  pages =	{109--130},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2013},
  volume =	{2},
  number =	{9},
  editor =	{Anthony D. Joseph and Pavel Laskov and Fabio Roli and J. Doug Tygar and Blaine Nelson},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{http://drops.dagstuhl.de/opus/volltexte/2013/3790},
  URN =		{urn:nbn:de:0030-drops-37908},
  doi =		{10.4230/DagRep.2.9.109},
  annote =	{Keywords: Adversarial Learning, Computer Security, Robust Statistical Learning, Online Learning with Experts, Game Theory, Learning Theory}
}

Keywords: Adversarial Learning, Computer Security, Robust Statistical Learning, Online Learning with Experts, Game Theory, Learning Theory
Collection: Dagstuhl Reports, Volume 2, Issue 9
Issue Date: 2013
Date of publication: 07.02.2013

